Oracle Top Tips - Getting the environment right for a smooth rollout
When project timelines are tight (when are they ever anything else?), it’s easy to get carried away and leap straight into deploying your application and database. But a structured approach helps ensure a smooth rollout, and having a playbook means you don’t overlook important steps.
Getting your environment right is easily done while things are being built, but far more difficult once services are running and outages have to be arranged with any number of stakeholders.
This article is a collaboration between Ian Harrison and Kristof Vangeenderhuysen, Senior Database Advisor at MONIN (monin-it.be), a DBmarlin partner and provider of database management services. Between them, they have over 40 years of Oracle DBA experience designing and implementing databases for reliable performance (and MONIN’s experienced DBA team can boast over 100 years of experience!).
So, without further ado, here are their top tips for getting your environment right!
Top Tip #1: Document Everything
Although it might seem like wasted energy while you are in the dev/test stages of your release cycle, try to ensure you know how your key business transactions perform - this includes documenting the execution path and the expected I/O per execution.
Make a note of the expected maximum execution time so that if performance degrades over time, you know what ‘good’ looks like. Along with your queries, make sure you document initial start sizes for your segments and projected growth.
This makes it a lot easier down the line when something grows exponentially and your key transaction’s execution plan changes.
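A documented baseline for a key transaction can be as simple as a small structured record plus a check against it. The sketch below is one illustrative way to capture this; the field names, transaction name, and figures are all hypothetical, not a standard format:

```python
# Sketch: a minimal performance baseline record for one key business transaction.
# All field names and figures are illustrative examples, not recommendations.
baseline = {
    "transaction": "order_checkout",     # hypothetical transaction name
    "expected_plan_hash": "3756270952",  # execution plan hash captured in test
    "expected_io_per_exec": 120,         # expected logical reads per execution
    "max_elapsed_ms": 250,               # what 'good' looks like
}

def degraded(observed_ms, record):
    """Flag a run that is slower than the documented maximum."""
    return observed_ms > record["max_elapsed_ms"]

print(degraded(400, baseline))  # True  - slower than the documented 'good'
print(degraded(200, baseline))  # False - within the baseline
```

Keeping records like this alongside the query text gives you an objective answer to "has it got worse?" later on.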
Documenting your code with inline comments is a great way of ensuring the ongoing supportability of your system. This makes it that much easier for someone to understand and debug the code later on.
Whilst we’re documenting, make sure you document the server/database and its intended use - again, this is critical for supportability as it’s difficult to prioritise issue resolution if no one has heard of your database.
Top Tip #2: Get the Server and Instance Setup Right
While the 80/20 rule says that you will achieve only 20% of performance improvements through instance tuning and 80% through SQL tuning, this doesn’t mean tuning your instance to your server’s capabilities should be neglected.
Assuming you have a dedicated database server (which we recommend), the ready reckoner is to ensure that database instances total no more than 75 - 80% of the server memory.
If you have a mixed-use server, then make allowance for user and application memory footprints too. Don’t allocate so much memory to your databases that you make your system page-fault as hard paging is really bad for performance.
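The 75–80% budget check above can be sketched as simple arithmetic. The server and SGA figures below are hypothetical examples:

```python
# Sketch: check that combined instance memory stays within ~75-80% of server RAM,
# per the rule of thumb above. All figures are hypothetical examples.

def within_budget(server_gb, instance_sgas_gb, ceiling=0.80):
    """Return (total_gb, ok) where ok means the instances fit under the ceiling."""
    total = sum(instance_sgas_gb)
    return total, total <= server_gb * ceiling

# A 128 GB dedicated server with three instances of 48, 32 and 16 GB:
total, ok = within_budget(server_gb=128, instance_sgas_gb=[48, 32, 16])
print(f"Instances use {total} GB; within budget: {ok}")  # 96 GB <= 102.4 GB -> True
```

On a mixed-use server you would lower the ceiling further to leave room for user and application memory.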
On the subject of memory, Linux server HugePages are strongly recommended. Ensure you explicitly set this rather than relying on Transparent HugePages implementations. The disadvantage of Transparent HugePages is that parts of your database memory may still be paged out if the O/S starts struggling. Make sure you allocate enough HugePages for all your instances to fit inside.
Once you have set HugePages in the operating system, set use_large_pages in your instances to ensure they use this memory. We recommend setting use_large_pages = ONLY. This ensures that your instance either correctly uses huge page memory or fails to start, which tells you to check your sums.
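As a rough illustration of "checking your sums", the page count for vm.nr_hugepages can be estimated from the instance SGA sizes. This sketch assumes the common 2 MB huge page size on x86-64 Linux; the SGA figures are hypothetical, and real sizing should be validated against Oracle’s own HugePages guidance for your platform:

```python
# Sketch: estimate vm.nr_hugepages needed to hold all instance SGAs.
# Assumes 2 MB huge pages (the x86-64 Linux default); SGA sizes are hypothetical.
import math

HUGE_PAGE_MB = 2  # default huge page size on most x86-64 Linux kernels

def nr_hugepages(sga_sizes_mb, headroom_pages=2):
    """Round each SGA up to whole huge pages, plus a little per-instance headroom."""
    return sum(math.ceil(sga / HUGE_PAGE_MB) + headroom_pages
               for sga in sga_sizes_mb)

# Two instances with 16 GB and 8 GB SGAs:
pages = nr_hugepages([16384, 8192])
print(f"vm.nr_hugepages = {pages}")  # 8192 + 4096 + 2*2 = 12292
```

With use_large_pages = ONLY, an undersized vm.nr_hugepages shows up immediately as an instance that refuses to start, rather than as silent paging later.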
Top Tip #3: Get the Storage Setup Right
For your disk layout, make it compliant with Oracle’s Optimal Flexible Architecture (OFA) and remember that on write-intensive systems, your fast recovery area (FRA) can become very busy with log switches and back-ups.
These log switches can significantly affect performance, so remember to plan your log size and layout carefully. Remember that on extremely write-intensive systems, the archiver processes will add to the I/O load as they copy the logs.
Make sure you follow recommended filesystem mount options for the different database areas, as features like filesystem journaling make no sense when the database is logging all transactions. Consider minimising the software layers in your stack by creating the filesystem directly on the disk instead of on a partitioned disk with LVM on top. There is a trade-off here, as standardisation of approach for supportability is important at the server level.
Use Automatic Storage Management (ASM), which is Oracle’s preferred storage management tool for databases.
Top Tip #4: Make Connections
Ensure your application server is pooling connections to avoid constant logging on and off, as very high connection rates impose a load on the database through authentication and audit-trail actions.
Ensure you know how many connections your app servers should make on a day-to-day basis, and double it to give yourself a good margin for peak loads.
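The doubling rule above can be turned into a quick sizing check. The application names, connection counts, and the PROCESSES figure below are hypothetical examples:

```python
# Sketch: derive pool-size ceilings from observed day-to-day connection counts,
# using the "double it" rule above. App names and figures are hypothetical.

def pool_ceiling(day_to_day_connections):
    """Double the normal working count to leave margin for peak loads."""
    return day_to_day_connections * 2

observed = {"web-frontend": 40, "batch-api": 10}  # typical daytime connections
ceilings = {app: pool_ceiling(n) for app, n in observed.items()}

total_needed = sum(ceilings.values())  # sessions the database must accommodate
processes_param = 200                  # hypothetical PROCESSES setting on the instance
print(ceilings, total_needed, total_needed < processes_param)
# {'web-frontend': 80, 'batch-api': 20} 100 True
```

Checking the summed ceilings against the instance’s process limit up front avoids connection storms hitting a hard limit at the worst possible moment.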
Top Tip #5: Code Your Queries for Efficiency
The tuning tenets of SQL are clear: making your queries as efficient as possible ensures they remain performant when production volumes scale up, or when your least likely business prospect has an unexpected success! Avoid complex constructs such as views querying other views with an additional where clause on top. The more complex the query, the more complex (and labour-intensive) tuning its execution plan will become.
Have a look at our SQL Dos and Don’ts blog for some top tips.
Top Tip #6: Plan for Scalability
Planning for scale should be a way of life. It’s much easier to build for scale when you start your project rather than having to tune it when it’s live.
Most scalability issues tend to be caused by contention, so implement your system with this in mind and design out possible sources.
Keep in mind:
Hot tables: These often contain key application data. Inside these key tables, there can be ‘hot blocks’ which cause issues due to continual updates and selects. Consider the possibility of using procedures to help here.
Consistent Reads: Watch out for ‘consistent reads’ in V$SESSTAT. These can indicate that a large update is happening somewhere, forcing Oracle to copy blocks in the buffer cache to maintain read consistency for other sessions querying the same data.
Locking: Make sure your application only takes locks when it needs to, and even then, locks for the minimum duration. Long-lived locks not only block other users’ access, but can cause the database to start creating read-consistent blocks (see above).
Concurrency: Avoid having multiple processes doing the same thing, such as having two jobs running at the same time, updating the same set of data. Besides causing blocking lock issues, you can start seeing deadlock issues as well.
Top Tip #7: Plan for Business Continuity
This is an area that deserves a blog all of its own! But the short version is to ensure your backups work (and keep working) - and do not impact the application performance.
Make sure you have backup and recovery plans, and have communicated them to your client/business so they understand what recovery scenarios are possible (and those that aren’t)!
For example, it might not be possible to recover a logical corruption in a specific table alone - tablespace or even database level recoveries may be required.
Include the business stakeholders in decisions around your backup and DR setup. Make sure the setup is in line with the business expectations in terms of Recovery Time Objective and Recovery Point Objectives (RTO/RPO); you want to avoid surprises in the future. For example, if the business expects zero data loss, then you will have no choice other than setting up a standby database. But this setup can have a (huge) impact on performance, so this will need to be taken into account during the design - not when the environment is live.
Most important of all, make sure that you test your recovery procedures, and you can actually get a system up and running from one of your backups.
Top Tip #8: Think About Your Data Archives
During the design, think about your data archives. Legal requirements or business expectations might require you to keep a history of all your past transactions. Think about where and how you will store this data.
If you store this historical data (which will probably never change again) in the same table as your live production data, this could impact your production performance. For example, if your business process creates 10,000 rows per year and you keep data for 10 years, your table will contain 100,000 rows. When your current year processing accesses the data, you will have to be selective and your queries will have to exclude 90% of the data.
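The arithmetic in that example shows how quickly history dwarfs the live data. The figures below simply mirror the example above:

```python
# Sketch: how live-versus-history row counts translate into wasted selectivity.
# Figures mirror the example above: 10,000 rows/year, kept for 10 years.
rows_per_year = 10_000
years_kept = 10

total_rows = rows_per_year * years_kept     # 100,000 rows in the table
live_fraction = rows_per_year / total_rows  # share current-year queries actually want

print(f"Current-year work touches {live_fraction:.0%} of the table; "
      f"the other {1 - live_fraction:.0%} is history the queries must exclude.")
# Current-year work touches 10% of the table; the other 90% is history ...
```

And the ratio only gets worse with every year of retention, which is exactly why separating the archive pays off.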
Consider offloading this archive data to a separate database, or at least to separate tables in the same database, so you can separate your production load from your archive load. This approach can save costs through cheaper (compressed?) storage, and the archive can maybe even be set read-only so it’s no longer backed up every day/week.
Or take a look at partitioning. Again, do this at design time rather than when live services are running. Although possible, implementing partitioning on a live system is best avoided, as performance will be degraded while it happens.
If you have a large amount of reporting on the historical data, perhaps for business insights, then you should offload this work too, maybe into a read-only copy of the database.
Many options, many possibilities, but do think about them from the beginning.
Top Tip #9: Take Care of Your Data(base)
As good as your initial setup might be, over time your database will become polluted by insert/update/delete activity. Tables and indexes become fragmented, and this will cause performance issues if you don’t proactively take care of your data.
Put maintenance plans in place where you rebuild the indexes and defragment the tables (and don’t forget to look at those LOBs!). Depending on your database release (Enterprise Edition / Standard Edition), this might be possible online, but might also require downtime. In either case, doing it regularly will avoid unnecessary application or webshop outages later on.
Top Tip #10: Set Up Monitoring from the Start
It’s no good waiting until you have your first production problem before thinking about monitoring. Monitoring must be built in from the start otherwise you won’t have the history to know when a problem started or what might have triggered it. Think about all your key performance metrics and the tool or tools you need to measure them. Make sure any alerting thresholds are set to appropriate levels so as not to trigger too many or too few alerts.
If your organisation doesn’t already have a preferred monitoring tool or is looking to update to something more modern, we highly recommend taking a look at DBmarlin which covers not only Oracle, but a wide variety of database types running in the cloud or self-hosted.
These database environment tips should make your implementations easier. If you can introduce some of these as standard steps in your project plans, then they will become automatic and your business will reap the benefit of good service stability and performance.
Let us know what you think about these tips and if there are any that we missed.