For any of you who’ve struggled to tune the degree of parallelism (DOP) and affinity masks on multi-core SQL Servers to chop down query times, I have some good news and some bad news.
The bad news first: your well-intentioned attempts are being defeated by an insidious collection of choke points outside of your control that afflict I/O-intensive workloads, OLTP in particular. The good news: these choke points have recently been isolated.
Seasoned practitioners may be thinking, “nothing novel here, we’ve known storage to be the culprit for years.” And while that’s one sure place to look, that’s not where I’m going with this.
The newly exposed choke points lie above the storage interfaces and below the SQL Server boundary. Their discovery, like that of black holes in astronomy, took some deduction. They were detected only after sophisticated caching algorithms and ultra-fast flash devices had dramatically cut down I/O latencies from milliseconds to microseconds. Despite such advances, the expected improvement in transaction rates for highly concurrent OLTP environments failed to materialize.
Instead, an especially odd behavior is frequently observed: judging by low CPU utilization (20–35%) and ample free memory, the systems appear to be doing very little work. What gives?
It turns out the existence of these “I/O black holes” dates back to the time when processor clock speeds flattened and CPU cores multiplied. But that’s a story for another blog.
What’s important is that now that these choke points have been revealed, clever new software technologies such as our own Parallel I/O can navigate around their time-sucking effects to yield spectacular results. How does it work? Join our free webcast to find out.