Situation:
Our log cleanup is failing every night. We looked at the tables and found that, for instance, AU_TRANSACTIONDATA has 4.5 million records. Apparently the log cleanup had not been running for some time, so the tables have grown very large.
In order to "catch up" again, we'd like to just truncate the log tables. Are there any side effects or dependencies that we're not aware of, or is it OK to truncate these tables?
Recommendation:
We would recommend correcting the issue with the log cleanup and allowing the job to run normally (the default schedule is nightly at 8 PM). The number of records is fairly large, but other customers are cleaning up larger numbers of records daily without issue.
Solution:
If you truncate the log tables, all existing logs will be deleted from the database and you will no longer be able to view any previous logs; new logs will continue to be created.
If you want to retain the logs for, say, 2 days, you can instead use DELETE queries to remove rows older than a specified date. Delete the data by running queries against each table where the date column is less than the specified cutoff, covering no more than 5 days of data per query. (The date/time values are stored as epoch milliseconds.)
Below is the list of tables on which the delete queries need to be run, along with the column to use in the WHERE condition for the date-based cleanup:
au_log
Column Name: AU_LOGDATE
au_log_archive
Column Name: AU_LOGDATE
au_transactiondata
Column Name: AU_ACTIVITYENDTIME
au_transactiondata_archive
Column Name: AU_ACTIVITYENDTIME
au_transactionlog
Column Name: AU_LOGDATE
au_transactionlog_archive
Column Name: AU_LOGDATE
Example Query:
delete from au_log where AU_LOGDATE <= 1401220800000;
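The same pattern applies to every table in the list above, so the statements can be generated rather than written by hand. Below is an illustrative Python sketch; the table-to-column mapping comes from the list above, and the cutoff value (from the example query) is just a placeholder:

```python
# Mapping of log tables to the date column used for cleanup,
# taken from the table list above.
TABLES = {
    "au_log": "AU_LOGDATE",
    "au_log_archive": "AU_LOGDATE",
    "au_transactiondata": "AU_ACTIVITYENDTIME",
    "au_transactiondata_archive": "AU_ACTIVITYENDTIME",
    "au_transactionlog": "AU_LOGDATE",
    "au_transactionlog_archive": "AU_LOGDATE",
}

def build_delete(table, column, cutoff_ms):
    """Build a delete statement removing rows at or before cutoff_ms."""
    return f"delete from {table} where {column} <= {cutoff_ms};"

# One statement per table for a given cutoff (epoch milliseconds).
statements = [build_delete(t, c, 1401220800000) for t, c in TABLES.items()]
```

The generated statements can then be executed against the log database with whatever database client you normally use.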
To convert a date and time to epoch milliseconds, you can use: https://currentmillis.com/
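If you prefer to compute the cutoff programmatically rather than through the website, a minimal sketch follows; the 2-day retention period and the fixed reference time are only examples:

```python
from datetime import datetime, timedelta, timezone

def cutoff_millis(days_to_keep, now=None):
    """Return epoch milliseconds for 'days_to_keep' days before 'now' (UTC)."""
    now = now or datetime.now(timezone.utc)
    return int((now - timedelta(days=days_to_keep)).timestamp() * 1000)

# Example: keep 2 days of logs, measured from a fixed reference time.
cutoff = cutoff_millis(2, datetime(2014, 5, 29, 20, 0, tzinfo=timezone.utc))
# The resulting value can be substituted into the delete queries above.
```

Note that the computation is done in UTC; if your log timestamps are recorded in a different time zone, adjust the reference time accordingly.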