In an earlier post, we saw how Oracle’s UNDO works in very basic terms. REDO is another critical aspect that goes hand in hand with UNDO. Let us get the basics of REDO sorted out here.
UNDO and REDO
In its most basic sense
- UNDO is information saved to get back in case a ROLLBACK was issued
- REDO is information saved to make the change happen again at any time (ROLL FORWARD)
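To make the two concepts concrete, here is a minimal SQL sketch (the `emp` table and its columns are hypothetical, used only for illustration):

```sql
-- Increase a salary; Oracle tracks the "before" image in UNDO
-- and records the change vector in the REDO buffer.
UPDATE emp SET salary = salary * 1.10 WHERE emp_id = 100;

-- Using UNDO: the before image restores the old salary.
ROLLBACK;

-- Or commit instead: the redo entries for this transaction are
-- persisted to the online redo log before control returns.
UPDATE emp SET salary = salary * 1.10 WHERE emp_id = 100;
COMMIT;
```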
To understand REDO, one has to at least get a glance at the memory structures of an Oracle instance. From the beginning days of our IT careers, we understand that databases “buffer” data (buffering as in taking data from the disk and keeping it readily available in memory to serve it out fast).
Oracle’s memory structure, shown below, has different buffers.
** Image from Oracle Docs
Role of REDO – A Camcorder of data changes!
“Buffer cache” in the System Global Area (SGA) is where the “database data” as we know it is cached. Redo is data too, but a different kind of data. What kind is it?
REDO is like a camcorder for data changes. It records all changes made to data in the form of “REDO entries” so that if the database were to go back to an older state for any reason (like media failure), the data in the REDO can be used to reconstruct the changes.
Data’s journey via REDO
Let us get some basic doubts out of the way before jumping ahead of ourselves.
Question: If REDO is in the memory, how is it going to protect my changes if I abruptly power off the machine?
REDO goes through several stages beyond what one sees above in the memory structure image.
To illustrate, let us consider an example with the big picture in mind.
- An employee record is updated to increase the salary (not committed yet)
- UNDO information – i.e., the “before” picture of the data is kept track of in the “Buffer Cache”. This will be used if a ROLLBACK was issued.
- Changes are made to the data blocks for the table (and index blocks if applicable) in the “Buffer Cache”
- The change (both #2 and #3 above) gets recorded in the REDO buffer – i.e., change vector (not the change itself) is buffered in the memory
- It may appear that the changes are being tracked redundantly, but it will become apparent later on why Oracle does this
- When a commit is issued, the transaction is assigned an SCN (System Change Number)
- A background process called LGWR (Log Writer) writes the changes to the active “REDO Log File” (disk file)
- Only after the changes have been persisted to the redo log successfully does control return to the user who issued the commit
- Separately, another background process named DBWR (DB Writer), writes the data/index block changes to the data files when “Buffer Cache” becomes full
- This does not happen until the LGWR finishes its work
- This does not have to complete for control to return to the session after a COMMIT. The data is already safe in the redo logs!
- Undo data is also written out from the Buffer Cache to the UNDO segments and kept ACTIVE until it can be safely EXPIRED
- To support flashback queries, i.e., queries that ask for data as of a point in time in the past
- When “pre change” data is being actively referenced by queries issued before this transaction started
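The SCN assignment and the flashback use of UNDO described above can be observed directly. A small sketch (the `emp` table is hypothetical; `v$database` and the `AS OF` clause are standard Oracle features):

```sql
-- The current SCN of the database
SELECT current_scn FROM v$database;

-- Flashback query: read emp as it looked 10 minutes ago,
-- reconstructed from UNDO data that has not yet expired.
SELECT *
FROM   emp AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE
WHERE  emp_id = 100;
```

If the UNDO needed for the requested point in time has already been overwritten, Oracle raises the well-known ORA-01555 “snapshot too old” error.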
The picture below illustrates how the redo gets transferred from memory (SGA) to the online REDO log file by LGWR.
** Thanks again for the image Oracle Docs
A database has two or more online redo log files. At any point in time, only one of them is active. When the active one gets full, LGWR round-robins through the set of available online redo log files (a log file switch). There is more than one file to reduce contention: a single redo log file would become the point of contention and hence the bottleneck.
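You can watch the round-robin behavior yourself. A quick sketch using standard Oracle views (DBA privileges assumed):

```sql
-- List the online redo log groups; exactly one shows status CURRENT
SELECT group#, sequence#, status FROM v$log;

-- Force a log file switch and re-run the query above to see
-- the CURRENT status move to the next group.
ALTER SYSTEM SWITCH LOGFILE;
```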
Extra protection with Archive logs
The journey for redo data does not stop there. One has to consider the question: “Sure, but are two or three redo log files enough to protect all my changes?” The answer depends on how much you care about protecting the changes. If you care a lot, you put your database in ARCHIVELOG mode.
In ARCHIVELOG mode, before LGWR overwrites the contents of an online redo log file, another background process named ARCn (the ARChiver) copies the contents off to a separate set of files called archived log files. This picture illustrates that:
** Copying an image from Oracle for a 3rd time!
If the database is in NOARCHIVELOG mode, which is the other choice, the online redo logs will get overwritten.
If a media failure occurs and the database was in NOARCHIVELOG mode, the latest full backup is the only recourse. However, if the database was in ARCHIVELOG mode, the archived log files can be used to reapply the recorded changes on top of the latest available backup and get back to the most current picture.
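Checking and changing the mode is straightforward. A sketch, assuming a SQL*Plus session connected as SYSDBA (switching modes requires a mounted, not open, database):

```sql
-- Which mode are we in?
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

After a media failure, it is these archived logs that RMAN replays on top of a restored backup (`RESTORE DATABASE;` followed by `RECOVER DATABASE;`).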
Here is another peculiar but true fact – REDO covers UNDO too. Changes to UNDO segments are also recorded as part of redo logs in addition to data block changes thereby protecting rollback data.
Some examples to clarify
Are you even more confused now? Let us consider some basic scenarios and see how these different, seemingly redundant pieces come into play.
- The user does a ROLLBACK right away after the UPDATE
The data in the UNDO is used to get back
- The instance is powered off or crashes before a COMMIT was issued
The transaction is simply lost, as the user would expect. Any uncommitted changes that did reach the data files are rolled back during instance recovery using the undo information (which, as noted above, is itself protected by redo)
- A COMMIT was issued and everything is well but a few minutes later Oracle instance failed
The data in the redo log files will be used to reconstruct the changes that were not yet applied to the data files
- Everything is fine but media failed after a day and the last full backup was from last week
The data in the archived redo log files will be used to reconstruct the changes on top of the restored backup
Something worth noting is that when the database is in NOARCHIVELOG mode, we could not have handled the last scenario above. The same is true if your table is set to NOLOGGING. In other words, if there is a media failure, the data in a NOLOGGING table is only as good as its contents in the last full backup. Use the NOLOGGING feature judiciously, for example on staging tables or tables with transient data that is not important.
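For completeness, here is what a typical NOLOGGING staging-table setup looks like (table names are made up; note that NOLOGGING reduces redo only for direct-path operations, such as inserts with the APPEND hint):

```sql
-- A staging table whose contents we can afford to lose
CREATE TABLE stage_emp NOLOGGING
AS SELECT * FROM emp WHERE 1 = 0;

-- Direct-path insert: generates minimal redo for stage_emp,
-- so these rows cannot be recovered from archived logs.
INSERT /*+ APPEND */ INTO stage_emp
SELECT * FROM emp;
COMMIT;
```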
When I was first getting started with Oracle and tried to understand these concepts, they were very difficult to comprehend in a practical manner. At the time, I could not piece together how things clicked. I slowly got it. Hopefully, this simple write-up will help you.