I recently read two papers on automating memory management - one for DB2, one for Oracle. While the papers aren't new (~2005), they are definitely worth reading. AFAIK both made it into the products. I have no idea how that turned out, but the outcome doesn't change my opinion that the papers were excellent. Disclaimer - I worked on query execution for Oracle at the time and made a few changes to the row sources that I maintained to support the work described by the Oracle paper.
Both describe methods to automate memory management with the goal of improving performance. In many DBMSs memory management is difficult for several reasons:
- It isn't global: the DBA must estimate good values for the sizes of the various caches and other large memory consumers (sort and hash areas for order by, aggregation, join and index create).
- It isn't bounded when allocations are specified per instance of a sort or join rather than as one limit on the memory used by all sorts or joins. In that case too many concurrent queries == OOM.
- It is static. This is an issue for large caches like the buffer pool (block cache).
- Differentiable functions were created to estimate the query latency saved per page of memory for a variety of consumers. All functions had the same form (see section 3.2). I think that time saved means wall clock time, to account for both CPU and IO overheads. A toy version of such a benefit curve is sketched after this list.
- For a few caches (statement cache and buffer pool) online simulators were added to the DBMS to estimate how the hit rate would change if the cache were given more memory. A minimal example of the idea follows this list.
- Constrained optimization was used to determine the optimal allocation at any point in time. The constraint was the amount of memory available. I assume that Lagrange multipliers were used.
- Feedback control was used to apply the desired memory configuration. One benefit is that it avoids the negative impact of suddenly applying a large change. A sketch of this step and the optimization step follows this list.
- Decisions are revisited because workloads (types of queries, concurrency) change.
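The sketch below is my guess at what such a benefit function might look like. The exact form is in section 3.2 of the DB2 paper; the curve time_saved(m) = a * (1 - exp(-m / b)) and the parameters a and b here are assumptions chosen only to show diminishing benefit per page.

```python
import math

# Hypothetical diminishing-returns curve for one memory consumer:
#   time_saved(m) = a * (1 - exp(-m / b))
# where m is pages of memory, a is the maximum wall clock time that can be
# saved and b controls how quickly the benefit flattens out. This form is
# an assumption for illustration; the real form is in section 3.2.

def time_saved(m, a, b):
    """Total seconds of query latency saved when the consumer has m pages."""
    return a * (1.0 - math.exp(-m / b))

def marginal_benefit(m, a, b):
    """Seconds saved per additional page at size m (the derivative of time_saved)."""
    return (a / b) * math.exp(-m / b)

# The marginal benefit shrinks as the consumer gets more memory, which is
# what makes a global trade-off between consumers possible.
for pages in (1_000, 10_000, 100_000):
    print(pages, round(marginal_benefit(pages, a=500.0, b=20_000.0), 6))
```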
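Next is a minimal sketch of an online hit-rate simulator, assuming an LRU cache: run a second, larger directory of page ids alongside the real cache and compare hit counts. The class name, sizes and the LRU policy are my assumptions for illustration; the papers describe their own simulators.

```python
from collections import OrderedDict

class LRUDirectory:
    """Tracks page ids in LRU order and counts hits; stores no page data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page id -> None, most recent at the end
        self.hits = 0

    def access(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)
            self.hits += 1
        else:
            self.pages[page_id] = None
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)   # evict the least recently used id

# One directory at the current cache size, one at the size being evaluated.
current = LRUDirectory(capacity=10_000)
what_if = LRUDirectory(capacity=20_000)

def on_page_reference(page_id):
    current.access(page_id)
    what_if.access(page_id)

# After feeding the page reference stream through on_page_reference,
# what_if.hits - current.hits estimates the extra hits (and so the IO time
# saved) from doubling the cache.
```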
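Finally, a sketch of the optimization and feedback steps under stated assumptions: a greedy stand-in for the constrained optimization (hand the next chunk of memory to whichever consumer currently saves the most time per page until the budget is spent) and a feedback step that moves each pool only a fraction of the way toward its target per tuning interval. The papers may solve the optimization analytically (for example with Lagrange multipliers); the consumer names, (a, b) parameters and the 25% step are invented for the example.

```python
import heapq
import math

def marginal_benefit(m, a, b):
    """Seconds saved per extra page; same illustrative curve as the first sketch."""
    return (a / b) * math.exp(-m / b)

def optimize(consumers, total_pages, chunk=1_000):
    """Greedy allocation: consumers maps name -> (a, b), returns name -> pages."""
    alloc = {name: 0 for name in consumers}
    heap = [(-marginal_benefit(0, *params), name) for name, params in consumers.items()]
    heapq.heapify(heap)
    remaining = total_pages
    while remaining >= chunk:
        _, name = heapq.heappop(heap)            # consumer with the best benefit now
        alloc[name] += chunk
        remaining -= chunk
        a, b = consumers[name]
        heapq.heappush(heap, (-marginal_benefit(alloc[name], a, b), name))
    return alloc

def apply_with_feedback(current, target, step=0.25):
    """Move each pool a fraction of the gap per tuning interval, not all at once."""
    return {name: int(current[name] + step * (target[name] - current[name]))
            for name in current}

# Invented consumers and parameters, just to show the mechanics.
consumers = {"buffer_pool": (500.0, 200_000.0),
             "sort_heap":   (200.0, 50_000.0),
             "stmt_cache":  (50.0,  10_000.0)}
target = optimize(consumers, total_pages=400_000)
current = {"buffer_pool": 300_000, "sort_heap": 80_000, "stmt_cache": 20_000}
print(target)
print(apply_with_feedback(current, target))
```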
- At least for sort, the real curve for time vs memory can't be described by a differentiable function. See page 2 in the Oracle paper: there are three intervals where each interval can be described by its own function, but the points where the intervals meet are a problem. I am not sure whether constrained optimization can handle that; the toy sort cost model after this list shows the jumps.
- At a higher level, the paper doesn't consider the steps in the time vs memory benefit for some row sources (this mostly repeats the previous point).
- It wasn't clear whether row sources could give back memory. There is a risk in giving a long-running sort or hash a lot of memory. If there is no facility to reclaim some of that memory then the allocation will be far from optimal until that query completes or is killed.
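To illustrate the jumps mentioned above, here is a toy external-sort cost model. It uses the textbook analysis (in-memory when memory >= input, otherwise spill runs to disk and merge them in one or more passes), not the model from either paper, and the page counts are made up.

```python
import math

def sort_io_pages(n_input_pages, m_work_pages):
    """Approximate pages of spill IO for sorting n pages with m pages of memory."""
    n, m = n_input_pages, m_work_pages
    if m >= n:
        return 0                                        # in-memory sort: no spill IO
    runs = math.ceil(n / m)                             # initial sorted runs on disk
    passes = math.ceil(math.log(runs, max(m - 1, 2)))   # merge passes needed
    return 2 * n * passes                               # write + read once per pass

# n = 100,000 pages of input. Note the jumps: multi-pass -> one-pass -> in-memory.
for m in (50, 500, 5_000, 99_999, 100_000):
    print(m, sort_io_pages(100_000, m))
```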
- For each instance of a row source (a query is composed of multiple row sources, and some row sources can use a lot of memory for sort and hash) the knees in the response time vs memory graph are estimated, see section 3.2 in the paper. These knees are a function of the row source and the data specific to a query (query A might be able to sort in-memory with 100M of RAM while query B might require 1G to remain in memory). A rough sketch of how such knees might be estimated follows this list.
- Decisions are made about the amount of memory that each row source can use, based on the information from the previous point.
- Over time a row source might be able to get more memory or might be told to give some back. Giving back memory might not be immediate, but it should eventually happen.
- The paper was vague about how point #2 above (deciding how much memory each row source gets) was done. While section 4.2.2 lists 5 rules used to guide the decisions, it wasn't clear that there was a goal beyond making sure all queries have enough memory to run. In contrast, the IBM paper was clear that constrained optimization was used and explains the function that was optimized.
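A back-of-envelope sketch of how the two knees for a sort row source might be estimated. It uses the textbook external-sort analysis (in-memory knee roughly the input size, one-pass knee roughly sqrt(input_bytes * io_unit)) rather than anything specific from the Oracle paper, and io_unit is an assumed IO transfer size.

```python
import math

def sort_knees(input_bytes, io_unit=1 << 20):
    """Estimate the in-memory and one-pass knees for a sort of input_bytes bytes."""
    optimal = input_bytes                              # the whole sort fits in memory
    one_pass = int(math.sqrt(input_bytes * io_unit))   # all runs merge in a single pass
    return optimal, one_pass

# Query A sorts 100M of data, query B sorts 1G of data.
for name, size in (("A", 100 << 20), ("B", 1 << 30)):
    optimal, one_pass = sort_knees(size)
    print(f"query {name}: in-memory knee ~{optimal >> 20} MB, "
          f"one-pass knee ~{one_pass >> 20} MB")
```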