Logging is another area that has seen little evolution. Most well-known products still rely on text-file-based logging. In bigger products, each module produces its own log: some designers prefer cumulative logs, while others go for time-stamped log files with cryptic names. You will be lucky if you even find a single centralized folder where all the logs are stored.

Our customers definitely deserve a better deal. And there are better, more ethical ways of generating revenue than hiding or obfuscating logs.

I always use centralized, database-driven logging. On the rare occasion that the product I am building does not have a database, I use an SQLite file as a substitute. The ER diagram follows:

The table LOG_HDR has an artificially generated primary key called log_hdr_id. The transaction ID is stored in trans_id; it is to be generated by the transaction-handling module. Any user interaction with the system that requires a server call should be treated as a transaction.

Usually the database session ID is enough to keep the transaction ID unique. The columns msg_type and msg_sum are summary columns. After inserting into LOG_DTL, the program should loop back and fill those two columns with the calculated values: set msg_type to the severity_level of the detail rows (the topmost one, if they differ) and msg_sum to the concatenation of all msg_text values against the same log_hdr_id.
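Here is a minimal sketch of that loop-back update, in Python against SQLite. The table and column names follow the description above; picking MIN(severity_level) for msg_type (since 1 is the topmost priority) and a delimiter-joined group_concat() for msg_sum is my interpretation, not a prescription.

```python
import sqlite3

def update_summary_columns(conn: sqlite3.Connection, log_hdr_id: int) -> None:
    """Loop back after the detail inserts and fill the two summary columns."""
    conn.execute(
        """
        UPDATE LOG_HDR
           SET msg_type = (SELECT MIN(severity_level)
                             FROM LOG_DTL WHERE log_hdr_id = ?),
               msg_sum  = (SELECT group_concat(msg_text, ' | ')
                             FROM LOG_DTL WHERE log_hdr_id = ?)
         WHERE log_hdr_id = ?
        """,
        (log_hdr_id, log_hdr_id, log_hdr_id),
    )
    conn.commit()
```

One caveat worth a comment in real code: SQLite's group_concat() does not guarantee the order of concatenation, so msg_sum should be read as a summary, not a strict timeline.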

In the detail table, LOG_DTL, log_hdr_id is the foreign key to LOG_HDR. The rest of the columns are self-explanatory; just keep in mind that a severity_level of 1 signifies the topmost priority.
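Since the ER diagram itself is not reproduced in the text, here is one possible SQLite rendering of the two tables. The primary key, trans_id, the two summary columns, and the foreign key come straight from the description above; log_dtl_id, logged_at, the column types, and the file name are assumptions of mine.

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS LOG_HDR (
    log_hdr_id INTEGER PRIMARY KEY,   -- artificially generated key
    trans_id   TEXT NOT NULL,         -- from the transaction-handling module
    msg_type   INTEGER,               -- summary: topmost severity_level
    msg_sum    TEXT                   -- summary: concatenated msg_text
);

CREATE TABLE IF NOT EXISTS LOG_DTL (
    log_dtl_id     INTEGER PRIMARY KEY,
    log_hdr_id     INTEGER NOT NULL REFERENCES LOG_HDR(log_hdr_id),
    severity_level INTEGER NOT NULL,  -- 1 signifies topmost priority
    msg_text       TEXT NOT NULL,
    logged_at      TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

conn = sqlite3.connect("app_log.db")  # hypothetical file name
conn.executescript(DDL)
```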

Make writing to those two tables a separate thread within your program (a sketch follows the list). There are two good reasons for that.

1. It is faster. The main flow does not wait for log I/O.

2. It is insulated. If the process of writing to those two tables itself fails, the main program carries on unaffected.
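A minimal sketch of such a background writer, assuming Python, the standard queue and threading modules, and the hypothetical SQLite tables above. The names log_queue and log_writer are mine; the point is only that the main flow enqueues and returns immediately, and that a failure stays inside the writer thread.

```python
import queue
import sqlite3
import threading

log_queue = queue.Queue()  # carries (log_hdr_id, severity_level, msg_text) tuples

def log_writer(db_path):
    # The connection is created inside the thread: sqlite3 connections
    # should not be shared across threads by default.
    conn = sqlite3.connect(db_path)
    while True:
        log_hdr_id, severity_level, msg_text = log_queue.get()
        try:
            conn.execute(
                "INSERT INTO LOG_DTL (log_hdr_id, severity_level, msg_text)"
                " VALUES (?, ?, ?)",
                (log_hdr_id, severity_level, msg_text),
            )
            conn.commit()
        except sqlite3.Error:
            pass  # reason 2: a logging failure must not take the program down
        finally:
            log_queue.task_done()

threading.Thread(target=log_writer, args=("app_log.db",), daemon=True).start()

# Reason 1: the caller enqueues and returns immediately; no waiting on I/O.
log_queue.put((1, 1, "payment gateway timed out"))
```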

Also, I usually keep auto-commit on to make things even faster.
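In Python's sqlite3, for instance, one way to get that behaviour is isolation_level=None, which puts the connection in autocommit mode (again using the hypothetical tables above):

```python
import sqlite3

# isolation_level=None means autocommit: every statement is written
# immediately, with no explicit commit() call.
conn = sqlite3.connect("app_log.db", isolation_level=None)
conn.execute(
    "INSERT INTO LOG_DTL (log_hdr_id, severity_level, msg_text) VALUES (?, ?, ?)",
    (1, 3, "cache refreshed"),
)
```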