2012/12/26
SQL Server Table Schema Queries and Column Collation
/* Tables */
Select * From SysObjects Where xType='U' Order By Name
/* Columns */
Select * From SysObjects A Inner Join SysColumns B On A.ID=B.ID Where A.xType='U' Order By A.Name,ColID
/* SQL for reading a table's column structure */
Select A.Name As TableName,B.ColOrder As ColOrder,B.Name As ColName,C.Name As ColType,B.Length As ColLen,B.XPrec As ColPrecision,B.XScale As ColScale
From (SysObjects A Inner Join SysColumns B On A.ID=B.ID) Inner Join SysTypes C On B.XType=C.XType
Where A.XType='u'
Order By A.Name,B.ColOrder
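The same information is also available through the standard INFORMATION_SCHEMA views, which saves joining the system tables by hand; a minimal sketch, with MyTable as a placeholder name:
Select Table_Name, Ordinal_Position, Column_Name, Data_Type, Character_Maximum_Length, Numeric_Precision, Numeric_Scale
From Information_Schema.Columns
Where Table_Name = 'MyTable'
Order By Ordinal_Position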
/* Change a column's collation */
ALTER TABLE MyTable ALTER COLUMN CharCol
varchar(10) COLLATE Chinese_Taiwan_Stroke
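Before altering, the current collation of each column can be checked; a minimal sketch using sys.columns (SQL Server 2005 and later; MyTable is a placeholder):
Select name, collation_name From sys.columns Where object_id = Object_Id('MyTable')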
2012/07/23
Install PostgreSQL 8.2.23 on CentOS 5.8
- Download source files from here
- Upload files to the CentOS server
- Install the required build packages
- #yum -y install gcc gcc-c++ autoconf libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libxml2 libxml2-devel zlib zlib-devel glibc glibc-devel glib2 glib2-devel bzip2 bzip2-devel ncurses ncurses-devel curl curl-devel e2fsprogs e2fsprogs-devel krb5 krb5-devel libidn libidn-devel openssl openssl-devel openldap openldap-devel nss_ldap openldap-clients openldap-servers
- #tar -zxf postgresql-8.2.23.tar.gz
- #cd postgresql*
- #./configure --prefix=/usr/local/pgsql --without-readline
- #make && make install
- Configure PostgreSQL
- create user: #adduser postgres
- create password: #passwd postgres
- Set ownership of the folders/files:
- #mkdir /usr/local/pgsql/data
- #chown -R postgres /usr/local/pgsql
- log in as postgres: su postgres
- initialize the data directory: /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
- edit config
- #vi /usr/local/pgsql/data/postgresql.conf
- change listen_addresses = 'localhost' to listen_addresses = '*'
- #vi /usr/local/pgsql/data/pg_hba.conf
- IPv4 local connections: host all all 127.0.0.1/32 trust
- Update the user's environment variables
- #vim /var/lib/pgsql/.bash_profile
- [ -f /etc/profile ] && source /etc/profile
- PGDATA=/usr/local/pgsql/data
- export PATH=/usr/local/pgsql/bin:$PATH
- export PGDATA
- Start the service
- start SQL service:
- #su - postgres
- #/usr/local/pgsql/bin/pg_ctl start -i -D /usr/local/pgsql/data
- check SQL status: ps aux | grep postgres
- set the postgres user's database password
- #su - postgres
- #psql postgres
- #alter user postgres ENCRYPTED PASSWORD 'postgres';
- to allow password-authenticated connections from the LAN, add to pg_hba.conf: host all all 192.168.1.0/24 password
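After editing pg_hba.conf or postgresql.conf, the running server has to re-read its configuration; a minimal sketch from a psql session (pg_reload_conf() should be available in the PostgreSQL 8.x line):
select pg_reload_conf();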
Note: as a shortcut, the packaged version can be installed instead:
command line: yum -y install postgresql postgresql-libs postgresql-server
2009/07/14
Dealing with a large transaction log file
Transaction log principles
Whenever a data update is made, entries are added to the transaction log.
It is not possible to prevent this, as it is part of the way SQL Server maintains integrity - particularly during recovery.
The transaction log is a circular file i.e. when the end is reached any free entries at the start will be used.
This means that all being well the file will stay at a constant size as the current entry cycles round.
The system maintains the MinLSN which is a pointer to the first active log record.
Any log records before this (in the circular file) are free.
The MinLSN will be prevented from moving forward by any open transactions - i.e. the oldest open transaction entry will be >= the MinLSN.
The MinLSN is updated at checkpoint, so committing a transaction will not immediately free entries, and anything that holds up the checkpoint can cause problems.
If the database is in simple recovery mode all entries prior to the MinLSN will be freed at checkpoint.
If the database is in full recovery mode (and a full backup has been taken) the entries prior to the MinLSN will only be freed by a transaction log backup (not full backup).
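In full recovery mode the routine fix is therefore a regular transaction log backup; a minimal sketch (MyDb and the backup path are placeholders):
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log.trn'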
Common cause of large transaction log file (.ldf)
Unfortunately the SQL Server default (except for local editions) leaves databases in full recovery mode.
This means that if no action is taken, no transaction log entries will be freed and the log file will eventually fill the disk and crash the system.
The SQL Server installation process is very simple and is commonly carried out by inexperienced personnel. This will appear to work happily but cause problems later.
I would recommend always setting the model database to simple recovery mode to set the default for new databases.
Stopping the transaction log file (.ldf) from growing
If the log file has grown due to being in full recovery mode, then set it to simple before going any further. This should immediately stop the log from growing.
Enterprise manager
Right click on the database, Properties, Options, set the recovery model to Simple, OK.
t-sql
sp_dboption [dbname], 'trunc. log on chkpt.', 'true'
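SQL Server 2000 and later also support ALTER DATABASE for this, which is the documented replacement for sp_dboption; a short sketch (MyDb is a placeholder):
ALTER DATABASE MyDb SET RECOVERY SIMPLE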
Shrinking the transaction log file (.ldf)
Before this make sure there are free entries by setting the recovery model to simple or backing up the log.
Enterprise manager
Right click on the database, All tasks, Shrink database, Files, Select log file, OK.
t-sql
dbcc shrinkfile ([db_log_name])
Here [db_log_name] is the logical name of the log file, as found from sp_helpdb or the sysfiles table.
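A target size in megabytes can also be supplied; for example, to shrink the log file to roughly 100 MB (MyDb_log is a placeholder logical name):
dbcc shrinkfile (MyDb_log, 100)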
Shrinking the log file via detach/attach
Always take a full backup before a detach.
Detach the database, delete/rename the log file, attach the database - this will create a minimum size log file.
Note that the log file must be deleted/renamed otherwise it will be re-used even though it is not mentioned in the attach.
Enterprise manager
Right click on the database, All tasks, Detach database, OK.
Delete/rename the disk log file.
Right click on databases, All tasks, Attach database, Select the .mdf file, OK, Yes (to the create new log message).
t-sql
sp_detach_db [dbname]
Delete/rename the disk log file.
sp_attach_single_file_db [dbname], [filename]
where [filename] is the name of the physical data file (.mdf).
[Reposted from here]
2008/09/11
MS SQL Server 2005 Error 15023
In SQL Server, a "login" is different from a "user": a login is used for authentication, while a database user account is used for database access and permission checks. A login is associated with a user through a security identifier (SID). When a database is restored onto a different server, it contains its set of users and permissions, but the corresponding logins may be missing, or the users those logins map to may not be the same users. This situation is known as having "orphaned users". The problem cannot be fixed by creating a new login, or by granting a same-named login "user" permissions on the database, because SQL Server reports error 15023: "User or role already exists in the current database". To resolve it, call the system stored procedure sp_change_users_login, used as follows:
Use Northwind
go
sp_change_users_login 'update_one', 'test', 'test'
Where:
1. Northwind is the database containing the orphaned user,
2. update_one is a procedure parameter indicating that only one user is processed,
3. the first test is the database "user",
4. the second test is the server "login".
This statement re-links the server login "test" with the Northwind database user "test"; after that the database can be used normally.
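To find out which users are orphaned in the first place, the same procedure can be called with the Report action; a minimal sketch:
Use Northwind
go
sp_change_users_login 'Report'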
2008/08/18
MS SQL Collation Topics (distinguishing case and more within a database)
Sort order options
Several sort order options can be applied to a specified Analysis Services Windows collation - including case, accent, kana, and width sensitivity - to further define the sorting and comparison rules. The following table describes the Windows collation sort order options and the associated suffixes in Analysis Services.

Sort order (suffix) | Description
---|---
Binary (_BIN)¹ | Sorts and compares data in Analysis Services based on the bit patterns defined for each character. Binary sort order is case-sensitive and accent-sensitive, and is also the fastest sort order. If this option is not selected, Analysis Services follows the sorting and comparison rules defined in dictionaries for the associated language or alphabet. For details, see "Guidelines for Using BIN and BIN2 Collations". Corresponds to the Binary option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
BIN2 (_BIN2)¹ | Sorts and compares data in Analysis Services based on Unicode code points for Unicode data; for non-Unicode data, BIN2 uses the same comparisons as binary sorting. The advantage of the BIN2 sort order is that applications comparing sorted data do not need to re-sort it, so BIN2 allows simpler application development and possible performance gains. For details, see "Guidelines for Using BIN and BIN2 Collations". Corresponds to the Binary 2 option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
Case-sensitive (_CS) | Distinguishes between uppercase and lowercase letters. If selected, lowercase letters sort ahead of their uppercase versions. Set by selecting the Case-sensitive option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
Case-insensitive (_CI) | Does not distinguish between uppercase and lowercase letters; for sorting purposes, Analysis Services treats them as identical. Set by clearing the Case-sensitive option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
Accent-sensitive (_AS) | Distinguishes between accented and unaccented characters; for example, 'a' is not equal to 'ấ'. If not selected, Analysis Services treats accented and unaccented letters as identical for sorting. Corresponds to the Accent-sensitive option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
Accent-insensitive (_AI) | Does not distinguish between accented and unaccented characters; for sorting purposes, Analysis Services treats them as identical. Set by clearing the Accent-sensitive option on the Collation Settings page of the Microsoft SQL Server Installation Wizard, or the Language/Collation page of the Analysis Server Properties dialog box in SQL Server Management Studio.
Kana-sensitive (_KS) | Distinguishes between the two types of Japanese kana characters, hiragana and katakana. If not selected, Analysis Services treats hiragana and katakana as identical for sorting.
Width-sensitive (_WS) | Distinguishes between a single-byte character and the same character represented as a double-byte character. If not selected, Analysis Services treats the single-byte and double-byte representations of the same character as identical for sorting.

¹ If BIN2 is selected, the case-sensitive, case-insensitive, accent-sensitive, accent-insensitive, kana-sensitive, and width-sensitive options are not available.

Each Windows collation is combined with a sort order suffix to define case, accent, width, or kana sensitivity. For example, the default value of the Analysis Services Collation configuration property is Latin1_General_AS_CS, which specifies the Latin1_General collation with an accent-sensitive, case-sensitive sort order.
[Note]: Reposted from the SQL Server 2008 Books Online.
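These suffixes can be exercised directly in T-SQL by forcing a collation in a comparison; a minimal sketch (Latin1_General_CS_AS is just one example of a case-sensitive collation):
Select Case When 'abc' = 'ABC' Collate Latin1_General_CS_AS Then 'equal' Else 'not equal' End
/* returns 'not equal'; under a case-insensitive (_CI) collation the same comparison returns 'equal' */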
2007/06/23
Call Stored Procedure
Calling a stored procedure should be much the same across databases, and even more so when calling through JDBC, right?! However, I have only tried this on Oracle 9.2, so the method below is only guaranteed to be correct on Oracle 9.2.
Suppose there is a stored procedure gpv(empId, password).
It takes an employee ID (empId) and returns the encrypted password; on success it returns 0, and on failure it returns an error code. The return value's type is an integer. How should the program be written?
Connection con = pool.getConnection();            // obtain a Connection from the connection pool
String sql = "{ ? = call gpv(?, ?) }";            // SQL statement; see note A
CallableStatement cstmt = con.prepareCall(sql);
cstmt.registerOutParameter(1, Types.INTEGER);     // note B
cstmt.setString(2, emp_id);                       // note C
cstmt.registerOutParameter(3, Types.LONGVARCHAR);
cstmt.execute();                                  // execute the call
int ret = cstmt.getInt(1);                        // CallableStatement uses getInt, not getInteger
String pwd = cstmt.getString(3).trim();           // note D
- Note A: This is the statement for calling a stored procedure when there is a return value; if there is no return value, write { call gpv(?, ?) } instead.
- Note B: A parameter that returns a value must be registered.
- Note C: A parameter that is input-only is set this way.
- Note D: The return value is the first parameter; the output parameter is the third parameter.
2007/03/28
Database - Storing Files in a Database (MS SQL 2000)
Environment:
DB : mssql 2000
Table schema :
id int,
fileContent image,
path varchar(255);
Inserting a file into the database
code:
try {
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
Connection con = DriverManager.getConnection("jdbc:sqlserver://127.0.0.1:1433", "cf-intermediate",
"cf-intermediate");
File file = new File("C:/MyEclipse5.1/workspace/myJava/abc.doc");
BufferedInputStream bufferedInputStream = new BufferedInputStream(new FileInputStream(file));
// read the file into a byte array
ByteArrayOutputStream arrayOutputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[4096]; // read in chunks instead of one byte at a time
int len;
while ((len = bufferedInputStream.read(buffer)) != -1) {
arrayOutputStream.write(buffer, 0, len); // write only the bytes actually read
}
arrayOutputStream.close();
bufferedInputStream.close();
byte[] bytes = arrayOutputStream.toByteArray();
System.out.println("bytes="+bytes.length);
PreparedStatement preState = con.prepareStatement("insert into erpglm_append values(?,?,?)");
preState.setBytes(1, bytes);
preState.setString(2, file.getPath());
preState.setString(3, "N");
preState.execute();
con.close();
} catch (Exception e) {
e.printStackTrace();
}
Reading the data back from the database and saving it as a file:
try {
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
Connection con = DriverManager.getConnection("jdbc:sqlserver://127.0.0.1:1433", "cf-intermediate",
"cf-intermediate");
String sql = "select fileContent from erpglm_append where id=1";
Statement stat = con.createStatement();
ResultSet rs = stat.executeQuery(sql);
byte[] bytes = null;
while (rs.next()) {
bytes = rs.getBytes("fileContent"); // getBytes avoids the unchecked cast from getObject
}
rs.close();
stat.close();
con.close();
System.out.println("bytes="+bytes.length);
BufferedInputStream bufferedInputStream = new BufferedInputStream(new ByteArrayInputStream(bytes));
File file = new File("C:/MyEclipse5.1/workspace/myJava/abc1.doc");
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(file));
byte[] data = new byte[4096]; // copy in chunks instead of one byte at a time
int n;
while ((n = bufferedInputStream.read(data)) != -1) {
bufferedOutputStream.write(data, 0, n); // write only the bytes actually read
}
bufferedOutputStream.flush();
bufferedInputStream.close();
bufferedOutputStream.close();
} catch (Exception e) {
e.printStackTrace();
}
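To verify what was stored without reading the blob back, T-SQL's DATALENGTH works on image columns; a small sketch against the erpglm_append table used above:
Select id, path, DataLength(fileContent) As byteCount From erpglm_append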