Huge pages are a Linux feature that may improve the performance of GT.M applications in production. Huge pages create a single page table entry for a large block (typically 2MiB) of memory in place of hundreds of entries for many smaller (typically 4KiB) blocks. This reduction in memory used for page tables frees memory for other uses, such as file system caches, and increases the probability of TLB (translation lookaside buffer) matches, both of which can improve performance. The performance improvement from reducing the page table size becomes evident when many processes share memory, as they do for global buffers, journal buffers, and replication journal pools. Configuring huge pages on Linux for x86 or x86_64 CPU architectures helps improve:
- GT.M shared memory performance: when your GT.M database uses journaling, replication, and the BG access method.
- GT.M process memory performance: for your process working space and dynamically linked code.
| Note |
|---|
| At this time, huge pages have no effect for MM databases; for the text, data, or bss segments of each process; or for the process stack. |
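To make the page table reduction concrete, consider a back-of-the-envelope calculation (a sketch; the 2GiB segment size is hypothetical):

```
# Page table entries needed to map a hypothetical 2GiB shared memory segment:
echo $(( (2 * 1024 * 1024) / 4 ))   # with 4KiB pages:      524288 entries
echo $(( 2 * 1024 / 2 ))            # with 2MiB huge pages: 1024 entries
```

Because each process attached to the segment carries its own page table entries, the savings scale with the number of concurrently attached processes.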
While FIS recommends you configure huge pages for shared memory, you need to evaluate whether configuring huge pages for process-private memory is appropriate for your application. Having insufficient huge pages available during certain commands (for example, a JOB command; see the complete list below) can result in a process terminating with a SIGBUS error. This is a current limitation of Linux. Before you use huge pages for process-private memory on production systems, FIS recommends that you perform appropriate peak load tests on your application and ensure that you have an adequate number of huge pages configured for your peak workloads, or that your application is configured to perform robustly when processes terminate with SIGBUS errors. The following GT.M features fork processes and may generate SIGBUS errors when huge pages are not available: JOB, OPEN of a PIPE device, ZSYSTEM, interprocess signaling that requires the services of gtmsecshr when gtmsecshr is not already running, SPAWN commands in DSE, GDE, and LKE, argumentless MUPIP RUNDOWN, and replication-related MUPIP commands that start server processes and/or helper processes.
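One way to watch huge page consumption during such peak load tests is the kernel's counters in /proc/meminfo (the values shown are illustrative):

```
$ grep ^HugePages /proc/meminfo
HugePages_Total:    1024
HugePages_Free:      256
HugePages_Rsvd:       64
HugePages_Surp:        0
```

If HugePages_Free (less HugePages_Rsvd) approaches zero at peak load, newly forked processes risk SIGBUS termination, and the pool should be enlarged.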
Consider the following example of a memory map report of a Source Server process running at peak load:
```
$ pmap -d 18839
18839:   /usr/lib/fis-gtm/V6.2-000_x86_64/mupip replicate -source -start -buffsize=1048576 -secondary=melbourne:1235 -log=/var/log/.fis-gtm/mal2mel.log -instsecondary=melbourne
Address   Kbytes Mode   Offset           Device   Mapping
--- lines removed for brevity ---
mapped: 61604K    writeable/private: 3592K    shared: 33532K
$
```
Process id 18839 uses a large amount of shared memory (33532K) and can benefit from configuring huge pages for shared memory. Configuring huge pages for shared memory does not cause a SIGBUS error when a process forks. For information on configuring huge pages for shared memory, refer to the "Using huge pages" and "Using huge pages for shared memory" sections. SIGBUS errors occur only when you configure huge pages for process-private memory; these errors indicate that you have not configured your system with an adequate number of huge pages. To prevent SIGBUS errors, perform peak load tests on your application to determine the number of huge pages required. For information on configuring huge pages for process-private memory, refer to the "Using huge pages" and "Using huge pages for process working space" sections.
As application response time can be adversely affected if processes and database shared memory segments are paged out, FIS recommends configuring systems for use in production with sufficient RAM so as to not require swap space or a swap file. You must configure an adequate number of huge pages for your application needs, as empirically determined by benchmarking and testing, and there is little downside to a generous configuration that ensures a buffer of huge pages for workload spikes. However, an excessive allocation of huge pages may reduce system throughput by reserving memory that applications unable to use huge pages could otherwise use.
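As a starting point for sizing, round each shared memory segment up to a whole number of huge pages; this sketch reuses the 33532K shared figure from the pmap example above:

```
hugepagesize_kib=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # typically 2048 (2MiB)
shm_kib=33532                                                        # shared memory from pmap above
echo $(( (shm_kib + hugepagesize_kib - 1) / hugepagesize_kib ))      # 17 huge pages for this segment
```

Summing such figures across global buffers, journal buffers, and replication journal pools, plus a margin for workload spikes, yields a reasonable initial pool size.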
| Prerequisites | Notes |
|---|---|
| A 32- or 64-bit x86 CPU running a Linux kernel with huge pages enabled. | All currently Supported Linux distributions appear to support huge pages; to confirm, use the first command in the sketch below. |
| A sufficient number of huge pages available. | To reserve huge pages, boot Linux with the hugepages=num_pages kernel boot parameter; or, shortly after bootup when unfragmented memory is still available, reserve them with a command such as the second in the sketch below. For subsequent on-demand allocation of huge pages, use the third. These delayed (from boot) actions do not guarantee availability of the requested number of huge pages; however, they are safe because, if a sufficient number of huge pages is not available, Linux simply uses traditionally sized pages. |
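One plausible sketch of these steps, assuming the hugeadm utility from libhugetlbfs is installed (the page counts are illustrative):

```
# Confirm the running kernel supports huge pages (hugetlbfs should be listed):
grep hugetlbfs /proc/filesystems

# Reserve 512 huge pages shortly after boot (run as root):
hugeadm --pool-pages-min DEFAULT:512

# Additionally permit on-demand allocation of up to 1024 huge pages:
hugeadm --pool-pages-max DEFAULT:1024
```

On systems without hugeadm, writing a page count to /proc/sys/vm/nr_hugepages as root reserves huge pages in a similar, though less flexible, way.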
To use huge pages for shared memory (journal buffers, replication journal pool and global buffers):
Permit GT.M processes to use huge pages for shared memory segments (where available, FIS recommends option 1 below; however, not all file systems support extended attributes). A combined sketch of both options appears after them. Either:
Set the CAP_IPC_LOCK capability for your mumps, mupip, and dse processes with a command such as:
```
setcap 'cap_ipc_lock+ep' $gtm_dist/mumps
```
or
Permit the group used by GT.M processes to use huge pages with the following command, which requires root privileges:
```
echo gid >/proc/sys/vm/hugetlb_shm_group
```
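Putting the two options together, here is a hedged sketch; $gtm_dist is assumed to point at your GT.M installation, and gtm is a hypothetical group name:

```
# Option 1: set CAP_IPC_LOCK on each GT.M binary (requires a file system
# that supports extended attributes; run as root or via sudo):
for f in mumps mupip dse; do
    setcap 'cap_ipc_lock+ep' "$gtm_dist/$f"
done

# Option 2: write the numeric group id of the group used by GT.M processes
# to the kernel ("gtm" is a hypothetical group name; run as root):
getent group gtm | cut -d: -f3 > /proc/sys/vm/hugetlb_shm_group
```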
Set the environment variable gtm_hugetlb_shm for each process to "yes".
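For example, in the shell from which GT.M processes are started:

```
export gtm_hugetlb_shm="yes"
```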
Refer to the documentation of your Linux distribution for details. Other sources of information include the hugetlbpage documentation in the Linux kernel source tree and the libhugetlbfs project.