Partitioning 50TB Oracle Database Whilst Migrating Exadatas
Case Study

January 1, 2025
Stephen Alleyn

The Requirement

During an Exadata migration project, the client informed us that one of their larger databases, 50TB in size and consisting largely of large objects (CLOBs and BLOBs), was extremely slow to query, even on an Exadata. To improve the querying of this data, the client asked us to:

  1. Upgrade the database from 18c to the latest 19c release
  2. Partition the data by month, with the data and large objects in the same tablespace
  3. Purge all data older than 7 years
  4. Keep only one backup of the data and minimise Commvault licensing costs
  5. Ensure that the solution was replicated so that, if the Primary Exadata became unavailable, they could query the data on the Secondary
  6. Place the new partitioned database on the new Exadata only, as there was insufficient space on the existing Exadatas
  7. Keep the outage window small, as this database was constantly receiving data from multiple other systems

As you can see, with a database being 50TB+ in size, addressing these requirements is not a simple matter!

The Process

We started this process with an internal meeting of several senior Corvus IT personnel to discuss how it could be done. Given the constraints and the other project activities occurring in parallel, we had to get it right the first time and ensure the larger migration project was not impacted. We then took the client through the process and shared our estimates.

The overall approach we decided on was to use database links from the existing database to the new database, with a short outage at cutover to migrate the most recently received data. The key to this exercise was to use data transportation methods that generated no logging or redo and maximised throughput.
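The direct-path load approach can be sketched as follows. This is a minimal, hypothetical example (the `docs`/`docs_part` table names and `src_db` link name are illustrative, not from the project): the `APPEND` hint requests a direct-path insert which, combined with NOLOGGING target segments, keeps redo generation to a minimum.

```sql
-- Illustrative sketch only: load rows from the old 18c database
-- over a database link into the new 19c partitioned table.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(8) */ INTO docs_part
SELECT *
FROM   docs@src_db
WHERE  received_date < DATE '2024-12-01';  -- up to the most recent month

COMMIT;
```

Because direct-path inserts load above the high-water mark and bypass the buffer cache, they suit an append-only source like this one; the final slice of data is then copied the same way during the cutover window.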

The steps we took were:

    1. We prepared a new 19c database; database links allow data to be loaded from an 18c database directly into a 19c database.
      1. The database was in NOARCHIVELOG mode to reduce archive storage requirements and maximise throughput.
      2. The tablespaces were created with the NOLOGGING clause, as were the tables.
      3. We used the database links to direct-path load the data, which minimised redo.
      4. The database storage was on Exadata XT storage, a lower-cost storage option from Oracle.
      5. A Data Guard standby of the database was created on a secondary Exadata, also on XT storage.
    2. The table partitioning was designed so that each month's partition was in its own tablespace. This allowed each backup to cover a single tablespace, which is the lowest level of backup granularity with Oracle RMAN.
    3. We did not reproduce any constraints from the source database, as this would have slowed the migration.
    4. We used the database links to insert data from the non-partitioned database into the newly created partitioned tables. This was done up to the most recent month, as the source database was append-only.
    5. Each tablespace had a single backup taken and stored for safekeeping so that recovery was possible. Each backup would be retained until the data it contained was 7 years old.
    6. At cutover, we had a short outage while the destination address of the requests inserting the records was re-routed; during that window we copied across all remaining data. The new 19c database was now live.
    7. Index creation was then performed outside of normal hours.
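A table design along the lines of steps 1 and 2 might look like the sketch below. All names are hypothetical; the point is one tablespace per monthly partition, NOLOGGING segments, and the LOB column stored with the row data (by default a LOB segment is placed in the same tablespace as its partition, which satisfies the colocation requirement).

```sql
-- Illustrative sketch: monthly range partitions, each in its own
-- tablespace, with the CLOB stored as a SECUREFILE alongside the data.
CREATE TABLE docs_part (
  doc_id        NUMBER,
  received_date DATE,
  payload       CLOB
)
NOLOGGING
LOB (payload) STORE AS SECUREFILE (NOLOGGING)
PARTITION BY RANGE (received_date) (
  PARTITION p2024_11 VALUES LESS THAN (DATE '2024-12-01') TABLESPACE ts_2024_11,
  PARTITION p2024_12 VALUES LESS THAN (DATE '2025-01-01') TABLESPACE ts_2024_12
);
```

Dropping a month's partition (and its tablespace) is then a cheap metadata operation, which is what makes the rolling 7-year purge practical at this scale.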
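The single-backup-per-month scheme in step 5 can be sketched with RMAN. Names and tags are illustrative: once a month's tablespace is complete it can be made read-only and backed up exactly once, with no need to re-include it in ongoing backups.

```sql
-- Illustrative sketch: one final backup per monthly tablespace.
-- In SQL*Plus: freeze the completed month's tablespace.
ALTER TABLESPACE ts_2024_11 READ ONLY;

-- In RMAN: take its single backup and tag it for retention tracking.
BACKUP TABLESPACE ts_2024_11 TAG 'TS_2024_11_FINAL';
```

Because a read-only tablespace cannot change, this one backup remains valid for recovery until the data reaches the 7-year retention limit and the tablespace is dropped.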

The exercise took 14 days from start to finish.

The Outcomes

The client now has a database that is supported by Oracle with the latest patches. The time taken to write to the large tables has been significantly reduced, and querying data within a month is massively faster. Commvault licensing costs have been avoided, and the client now has a fully redundant database that performs well at both ingestion and querying and is up to date with the latest security updates.

What started as a daunting exercise that appeared very difficult to achieve became a unique exercise that challenged the team and rewarded them with a great set of outcomes. It is a testament to how Corvus IT operates as a team: together we worked through the theoretical challenges and jointly designed the way forward. That design proved correct, and delivery came in under the budgeted time, which had included allowances for issues we managed to avoid.

To reiterate: we went from a single, out-of-support 50TB+ database with no redundancy, backup challenges, slow ingestion and difficult querying, to a supported database that is fully replicated and backed up, with low-latency ingestion and querying of data. This database took massive steps forward in governance, performance and resilience, and both the client and Corvus IT could not be happier with the end result we delivered. Contact us if we can do the difficult for you.
