I originally intended this to be a much longer post about memory in
Spark, but I figured it would be useful to talk about Shuffles
generally first so that I can gloss over them in the Memory discussion
and keep it a bit more digestible. Shuffles are one of the most
memory- and network-intensive parts of most Spark jobs, so it's important
to understand when they occur and what's going on when you're trying …
Continue reading →
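To make "when they occur" concrete, here is a minimal sketch (names and input data are hypothetical, not from the post): narrow transformations like `map` stay within a partition, while wide transformations like `reduceByKey` must co-locate records that share a key, which is exactly when Spark shuffles.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ShuffleSketch {
  def main(args: Array[String]): Unit = {
    // Local two-core context, just for illustration
    val sc = new SparkContext(
      new SparkConf().setAppName("shuffle-sketch").setMaster("local[2]"))

    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

    // map is a narrow transformation: each output partition depends on
    // exactly one input partition, so no data moves across the network
    val pairs = words.map(w => (w, 1))

    // reduceByKey is a wide transformation: all records with the same key
    // must end up in the same partition, so Spark inserts a shuffle here
    val counts = pairs.reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}
```

Running `counts.toDebugString` on the result shows the `ShuffledRDD` stage boundary that the `reduceByKey` introduced.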
If you've dug around in the Spark SQL code, whether in Catalyst or the
core code, you've probably seen tons of references to Logical and
Physical plans. These are the core units of query planning, a common
concept in database programming. In this post we're going to go over
what these are at a high level and then how they're represented and used
in Spark. The query planning code is especially important because it …
Continue reading →
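You can peek at both kinds of plan without reading any Catalyst source. A sketch using the 1.5-era `DataFrame` API (the input file and query here are hypothetical): `explain(true)` prints the parsed, analyzed, and optimized logical plans plus the physical plan, and the same objects are reachable programmatically through `queryExecution`.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PlanSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("plan-sketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical input file, just to have something to plan over
    val df = sqlContext.read.json("people.json")
    val adults = df.filter(df("age") > 21).select(df("name"))

    // Prints all the logical plans plus the physical plan
    adults.explain(true)

    // The same plans, as Catalyst TreeNode objects:
    println(adults.queryExecution.logical)      // unresolved logical plan
    println(adults.queryExecution.optimized)    // after optimizer rules
    println(adults.queryExecution.executedPlan) // physical plan

    sc.stop()
  }
}
```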
Note: This is being written as of early December 2015 and assumes the
Spark 1.5.2 API. The data sources API has been out for a few versions
now, but it's still stabilizing, so some of this may become out of
date.
I'm writing this guide as part of my own exploration of how to write a
data source using the Spark SQL Data Sources API. We'll start with
exploration of …
Continue reading →
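As a taste of what the Data Sources API looks like, here is a minimal read-only relation, sketched against the 1.5-era `org.apache.spark.sql.sources` interfaces (the class names and the rows it returns are made up for illustration): a `RelationProvider` constructs a `BaseRelation`, and mixing in `TableScan` lets Spark pull the relation's rows as an `RDD[Row]`.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// A relation with a single string column that always scans the same
// two rows -- the simplest possible BaseRelation + TableScan
class HelloRelation(override val sqlContext: SQLContext)
    extends BaseRelation with TableScan {

  override def schema: StructType =
    StructType(StructField("greeting", StringType, nullable = false) :: Nil)

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(Seq(Row("hello"), Row("world")))
}

// Spark looks up a class named DefaultSource in the package you pass
// to sqlContext.read.format(...)
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation =
    new HelloRelation(sqlContext)
}
```

With this on the classpath, `sqlContext.read.format("your.package.name").load()` would hand back a `DataFrame` over those rows; pruning and filter pushdown come from the richer `PrunedScan`/`PrunedFilteredScan` traits.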