Friday, November 25, 2011

How Often Should a Redo Log File Switch?


The redo log switch rate has a significant impact on database performance. Frequent log switches can slow the database down, while very infrequent switches increase the amount of data that could be lost if the current redo log file becomes corrupt. Oracle's documentation suggests sizing the redo log files so that a log switch happens roughly every 15-30 minutes (depending on the architecture and the recovery requirements).

But what happens when there is a bulk load? We cannot resize the redo log files for every load; that would be impractical. Generally we do not load data in bulk on a regular basis; it may happen only twice or thrice a week. So what is the right size?
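Before deciding on a size, it helps to measure how often the logs actually switch. The following is a minimal sketch, assuming access to V$LOG_HISTORY; it counts log switches per hour over the last week:

    -- Count redo log switches per hour for the last 7 days.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 7
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;

Hours with dozens of switches point at undersized logs; hours with no switches at all show how long redo can sit in the current log.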

Here is a very good explanation of this question by "howardjr":

One of my databases has very large logs which are not intended to fill up under normal operation. They are big enough to cope with a peak load we get every week. Previously, we had two or three log switches recorded under a single alert log timestamp! Now they switch every 10 minutes or so, even under the heaviest load.
So big logs are good for slowing things down under load. But I don't want to sit there with 5 hours of redo in my current log during non-peak normal running. Therefore, I set archive_lag_target to 1800 (seconds, i.e. 30 minutes), and I know that in the worst possible case I will only lose 30 minutes of redo.
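As a sketch, the parameter can be changed dynamically (assuming the instance runs off an spfile); the value is in seconds, and 0 disables the time-based forced switch:

    -- Force a log switch at least every 30 minutes, even if the current log is not full.
    ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;

    -- Setting it back to 0 disables the forced switch.
    -- ALTER SYSTEM SET archive_lag_target = 0 SCOPE=BOTH;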

I see LOADS of advantages in using archive_lag_target even for standalone instances; actually, especially for standalone instances. I want logs big enough not to cause rapid log switching, but I have bulk loads, so I have to have enormous logs to prevent rapid switching during those times. In fact, on one database I am connected to right now, I have 2GB redo logs which nevertheless manage to switch every 8 minutes on a Friday night. Imagine the frequency of log switches we had when those logs were originally created at 5MB each! And the number of redo allocation retries.

I'd like 8GB logs to get it down to a log switch every 30 minutes or so on a Friday night, but with multiple members and groups, that's just getting silly. And now I have an enormous log that will take forever and a day to fill up and switch when I'm NOT doing bulk loads: ordinarily, without a forced log switch, my 2GB log takes 3 days to fill up.
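Redo logs cannot be resized in place. The usual approach, sketched below with hypothetical group numbers and file paths, is to add new, larger groups and drop the old ones once they are no longer current:

    -- Add new, larger groups (paths, group numbers and size are examples only).
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/ORCL/redo04a.log', '/u02/oradata/ORCL/redo04b.log') SIZE 2048M;
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/u01/oradata/ORCL/redo05a.log', '/u02/oradata/ORCL/redo05b.log') SIZE 2048M;

    -- Switch out of an old group, checkpoint, and drop it once V$LOG shows it INACTIVE.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    ALTER DATABASE DROP LOGFILE GROUP 1;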

How does FAST_START_MTTR_TARGET affect the redo log files in case of recovery?

If I were to have a catastrophic hardware failure, I could lose my current redo log. FAST_START_MTTR_TARGET can't do anything to ameliorate that loss: flushing the dirty buffers to disk regularly doesn't protect my data. In fact, there is no way to recover transactions that are sitting in the current redo log if that log is lost. Therefore, having an enormous log full of hours and hours (in my case, about 72 hours' worth) of redo is a massive data-loss risk, and not one I'm prepared to take. Forcing log switches is a good thing for everyone to be able to do, when appropriate, even if they're not using Data Guard and standby databases.
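For reference, a minimal sketch of setting the MTTR target (value in seconds) and checking what Oracle estimates for instance recovery:

    -- Ask Oracle to keep estimated instance-recovery time near 120 seconds.
    ALTER SYSTEM SET fast_start_mttr_target = 120 SCOPE=BOTH;

    -- Compare the target with the current estimate.
    SELECT target_mttr, estimated_mttr FROM v$instance_recovery;

Note that this only controls how aggressively dirty buffers are checkpointed; as explained above, it does nothing to protect the contents of the current redo log itself.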

So huge log files are necessary, and a forced log switch thereafter is essential for data security. We can certainly try to minimise the risk: that's what redo log multiplexing is all about. But if we lose all copies of the current log, then we have lost the only copy of that redo, and that means we have lost data.
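As a sketch (the file path and group number are hypothetical), multiplexing simply means adding another member of each group on a separate disk:

    -- Add a second member to group 1 on a different disk.
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo01b.log' TO GROUP 1;

With at least two members per group on independent storage, losing a single disk no longer means losing the current redo.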

Frequent checkpoints can help minimise the amount of redo that is vulnerable to loss, but they do nothing to minimise the risk of that loss occurring. Redundant disks (mirroring), redundant controllers, multiplexing: those are the only things that can help protect the current redo log and thus actually reduce the risk of failure occurring in the first place. Frequent checkpointing simply reduces the damage that the loss of all current logs would inevitably cause, but it doesn't (and cannot) reduce it to zero. It is therefore not a protection mechanism at all.

Checkpoints set a limit on potential data loss from redo log damage, absolutely they do. But no matter how frequently we checkpoint, we cannot reduce potential data loss to zero, and reducing the potential cost of a disaster should it strike doesn't count as reducing the risk of the disaster happening. Buying car insurance doesn't reduce our risk of having a car accident: it simply means we can pay the bills when the accident eventually happens. Therefore, checkpoints cannot reasonably be called a "current redo log file protection mechanism". Mirroring, multiplexing and redundant hardware are the only ways to actually protect the current redo log. Safety and performance always have to be traded off against each other, and we cannot realistically propose going for just one or the other without appreciating the impact on the other.


Enjoy    :-) 

