From a computing perspective, one must often run the same function on a large amount of input. Running that function on many pieces of input is a very expensive process unless the work can be done in parallel.
Shard-Query introduces set-based processing, which on the surface appears similar to other technologies on the market today. However, the scaling features of Shard-Query are just a side effect of the fact that it operates on sets in parallel. Any set can be operated on with any arbitrary degree of parallelism up to, and including, the cardinality of the set.
Given that:
- It is often possible to transform one type of expression into a different, but compatible, type for computational purposes, as long as the conversion is bidirectional
- A range operation over a set of integers or dates can be transformed into one or more discrete sub-ranges (illustrated in the sketch below)
- Any operation on an entire set is the same as running that operation on each item individually
- After expanding a set operation into N discrete dimensions, it can always be collapsed back into a one-dimensional set
- Arrays are sets
Treating a general purpose computer as a SIMD computer is possible because, in a set, operations can be performed on all of the items independently. The SIMD processor simply needs to wait for all parallel operations on its input to complete. The problem is embarrassingly parallel, and the maximum degree of parallelism is easily enforced with a queue.
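To make these premises concrete, here is a minimal SQL sketch over a hypothetical table t(id, x): a range aggregate is decomposed into discrete sub-ranges, each of which could run on a separate worker, and the partial results collapse back into the original answer.

-- a range aggregate over a hypothetical table t(id, x)...
select sum(x) from t where id between 1 and 6;

-- ...is equivalent to recombining the aggregates of discrete sub-ranges;
-- each inner select is independent and can run on its own worker
select sum(s) as `sum(x)`
  from (
        select sum(x) as s from t where id between 1 and 3
        union all
        select sum(x) as s from t where id between 4 and 6
       ) recombined;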
Today I am going to show you how to take almost any function and treat a cluster of Turing computers of any size as a special-purpose SIMD computer with respect to your function. The SQL interface to Shard-Query imposes a wait for all the workers to complete, but you can register a callback function to handle the output for each input asynchronously, if you like.
Right now I believe this only works on finite sets. Set-based processing, of course, works on sets, and a document is a set of words, so I’ve decided to show how to count the number of unique words in a document, and how many times each of those words appears.
Before you read further, I want to tell you why I’ve decided to use the words of the Constitution of the United States of America as an example. It is my favourite document in the world. It speaks of honesty and integrity, truth and openness. I believe in all of these things. I believe that, with cheap computation, our world can become an amazing place. Please use this technology constructively and for peaceful purposes. Love one another and let’s solve all the complex problems in the world together in peace and harmony.
The following is a somewhat naive example, since grammar and punctuation are not taken into account. I start by splitting our document into a list of words:
cat /tmp/constitution.txt | sed -e 's/ /\n/g' > /tmp/words
I am going to perform the following operations on every word in the constitution:
1) compute the md5 of every item
2) compute the md5 on the reverse of every item
3) count the total number of words
4) count the frequency of words
5) order by the frequency of words, then by the md5 of the word, then by the md5 of the reverse of the word.
6) determine the number of unique words. This is not projected, but you can infer it from the number of items in the output set.
The US Constitution is not very large. I inflated the document size significantly to over 3 million “words” by duplicating the entire set multiple times.
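The LOAD DATA statement below needs a words table. I haven’t shown its definition here, but based on the columns used later (id and chars), something like this works:

create table words (
  id bigint auto_increment primary key,
  chars varchar(255)
);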
mysql> load data infile '/tmp/words' into table words (chars);
Query OK, 6033 rows affected (0.01 sec)
Records: 6033 Deleted: 0 Skipped: 0 Warnings: 0
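The duplication statement isn’t shown here either; one simple possibility is to double the table with insert ... select. Nine doublings take 6033 rows to 6033 × 2^9 = 3,088,896:

-- double the table; repeating this nine times yields 6033 * 2^9 = 3088896 rows
insert into words (chars) select chars from words;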
Having blown up the size of the words table, I create words2. This is the data upon which we will operate:
mysql> create table words2 partition by hash(bucket) partitions 12
    -> as select id % 6 bucket, chars words from words;
Query OK, 3088896 rows affected (2.41 sec)
Records: 3088896 Duplicates: 0 Warnings: 0
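Because id runs densely from 1 to 3,088,896 and bucket = id % 6, each bucket holds exactly 3,088,896 / 6 = 514,816 rows, which you can verify with a simple group-by:

select bucket, count(*) from words2 group by bucket;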
Here is the serial version as run by the native database interface (MySQL):
mysql> select word, md5(word), md5(reverse(word)), count(*)
    -> from words2 group by 1,2,3 order by 3,1 desc;
...
| Legislature      | 4380f755e4150b1c11f0ae9ca1910bcb | fecd2758f3c64c8176ce60c4ff7c1cf3 |     3072 |
| consent          | 9d721d9a89406a2a6861efaae44a785f | fede6baff4c3716c37a3c60bf4051b3f |      512 |
| admit,           | d803450bb41af1f7372af6ddc8e42d14 | fee1e6f166edfccd849fe4438eb1924f |      512 |
| Affirmation.     | 9568b7e19ee3da70d3e486134add2743 | fee5d3a27ec5be41941b5689f70c5587 |      512 |
| may,             | 289cf5ceddb80bab96c92de0a918e122 | fee80b247ce32faca9de1a031119533c |     1024 |
| legislatures     | 0640c734a3d25eed18126c7db6a39523 | ff238c73fea4086c10cda4a46aeb9d9a |     1024 |
| Time             | a76d4ef5f3f6a672bbfab2865563e530 | ff38a346616fc8a4df42c7f6c95bf1cc |     2048 |
| Congress:        | 873c419d2c2139bc8bbc3cbaffcc3473 | ff592a4dac2aa93c8a0589898885fe48 |      512 |
| Charles          | 399423ff652ebb6a6701be7ec3202fc6 | ffac637b74c0f062904ab466d9bf9e01 |     1024 |
| impairing        | 1c718d732bc6f6805835f8be6ef6e43e | ffc86c559e06009a743d891ce1e4fc4f |      512 |
+------------------+----------------------------------+----------------------------------+----------+
1427 rows in set (4.94 sec)
1427 rows in set (5.00 sec)
1427 rows in set (5.03 sec)
1427 rows in set (5.00 sec)
I ran the query a few times; since the data fits in memory, speed is nearly constant, and the single-threaded operation burns one CPU.
To fully demonstrate how Shard-Query makes parallel set operations work, I’ll operate in only one dimension for the first example, just as the MySQL client did. This will be a serial operation, because Shard-Query has no idea how to add parallelism in this case: it is data-set agnostic, operating only on sets, not relations. If it were smarter, it would ask the data dictionary about the partitioning.
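For example, the partition layout is visible in INFORMATION_SCHEMA (a sketch of what such a lookup could look like; Shard-Query does not do this today):

-- discover how words2 is partitioned; a smarter proxy could drive
-- its degree of parallelism from this metadata
select partition_name, partition_method, table_rows
  from information_schema.partitions
 where table_schema = database()
   and table_name = 'words2';

Here is the Shard-Query output for the single-dimension run: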
Array
(
    [word] => Congress:
    [md5(word)] => 873c419d2c2139bc8bbc3cbaffcc3473
    [md5(reverse(word))] => ff592a4dac2aa93c8a0589898885fe48
    [count(*)] => 512
)
Array
(
    [word] => Charles
    [md5(word)] => 399423ff652ebb6a6701be7ec3202fc6
    [md5(reverse(word))] => ffac637b74c0f062904ab466d9bf9e01
    [count(*)] => 1024
)
Array
(
    [word] => impairing
    [md5(word)] => 1c718d732bc6f6805835f8be6ef6e43e
    [md5(reverse(word))] => ffc86c559e06009a743d891ce1e4fc4f
    [count(*)] => 512
)
1427 rows returned (5.0057470798492s, 4.9994130134583s, 0.0063340663909912s)
The three numbers are wall clock time (as calculated by microtime()), SQL execution time, and parse time, respectively.
Performance is actually a little worse, which is not unexpected. Since Shard-Query must add a small amount of overhead, a single-threaded operation may be slower than the same operation on the native database.
That doesn’t matter, because Shard-Query is a smart database proxy that can add parallelism. In the next mode it will add six degrees of parallelism to the query:
Array
(
    [word] => Congress:
    [md5(word)] => 873c419d2c2139bc8bbc3cbaffcc3473
    [md5(reverse(word))] => ff592a4dac2aa93c8a0589898885fe48
    [count(*)] => 342
)
Array
(
    [word] => Charles
    [md5(word)] => 399423ff652ebb6a6701be7ec3202fc6
    [md5(reverse(word))] => ffac637b74c0f062904ab466d9bf9e01
    [count(*)] => 598
)
Array
(
    [word] => impairing
    [md5(word)] => 1c718d732bc6f6805835f8be6ef6e43e
    [md5(reverse(word))] => ffc86c559e06009a743d891ce1e4fc4f
    [count(*)] => 512
)
1427 rows returned (0.87930011749268s, 0.87229418754578s, 0.0070059299468994s)
Why six degrees of parallelism? Because that is how many physical cores are connected to my bus, and because I chose to create six hash “buckets” in the table. This allows MySQL to set up a sequential scan over the items in each bucket, particularly since we are examining all the items. We operate on all the buckets in parallel and then use intelligent expression substitution to put the results back together, when necessary. When sorting or grouping is used, a final pass over the combined result may be necessary, and this may add a small amount of serialization at the end.
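Because words2 is hash partitioned on bucket, a single-bucket predicate prunes to exactly one of the twelve partitions (with hash partitioning, bucket value N maps to partition p(N mod 12)). You can verify the pruning yourself with EXPLAIN PARTITIONS:

-- the partitions column of the plan should list a single partition
explain partitions select count(*) from words2 where bucket = 1;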
How does this work?
Here is the most important part of the explain plan in the mode without parallelism. Notice that there is only one query; if your database system cannot provide native parallelism, then performance will be poor.
-- SQL TO SEND TO SHARDS:
Array
(
    [0] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` GROUP BY 1,2,3 ORDER BY NULL
)
The other important optimization is the aggregation step, which combines results from multiple queries. Since this first query is single-threaded, it serves little purpose here, but it will be much more important in a moment.
-- AGGREGATION SQL:
SELECT `word`,`md5(word)`,SUM(`count(*)`) AS `count(*)`
  FROM `aggregation_tmp_39323566`
 GROUP BY 1,2
 ORDER BY 1 ASC
ON DUPLICATE KEY UPDATE `word`=VALUES(`word`), `md5(word)`=VALUES(`md5(word)`), `count(*)`=`count(*)` + VALUES(`count(*)`)
Now consider the query with bucket BETWEEN 1 AND 6 added to the WHERE clause. This creates boundary conditions for the query. Any bounded range of integers can be broken up into as many pieces as it contains items, and thus it is possible to convert the BETWEEN expression into a partition elimination expression.
Here is the output from the parallel version:
-- SQL TO SEND TO SHARDS:
Array
(
    [0] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 1 GROUP BY 1,2,3 ORDER BY NULL
    [1] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 2 GROUP BY 1,2,3 ORDER BY NULL
    [2] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 3 GROUP BY 1,2,3 ORDER BY NULL
    [3] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 4 GROUP BY 1,2,3 ORDER BY NULL
    [4] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 5 GROUP BY 1,2,3 ORDER BY NULL
    [5] => SELECT word AS `word`,md5(word) AS `md5(word)`,md5(reverse(word)) AS `md5(reverse(word))`,COUNT(*) AS `count(*)` FROM words2 AS `words2` WHERE bucket = 6 GROUP BY 1,2,3 ORDER BY NULL
)
The results from the six branches are combined with the following statement; its ON DUPLICATE KEY UPDATE clause is what powers the UPSERT:
-- AGGREGATION SQL:
SELECT `word`,`md5(word)`,SUM(`count(*)`) AS `count(*)`
  FROM `aggregation_tmp_27656998`
 GROUP BY 1,2
 ORDER BY 1 ASC
ON DUPLICATE KEY UPDATE `word`=VALUES(`word`), `md5(word)`=VALUES(`md5(word)`), `count(*)`=`count(*)` + VALUES(`count(*)`)
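I haven’t shown the definition of the aggregation table, but for the UPSERT to merge partial rows it must have a unique key on the grouping columns. A plausible shape is:

-- assumed shape of the scratch table; the unique key is what
-- makes ON DUPLICATE KEY UPDATE accumulate the partial counts
create temporary table aggregation_tmp_27656998 (
  `word` varchar(255),
  `md5(word)` char(32),
  `count(*)` bigint,
  unique key (`word`, `md5(word)`)
);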
All six branches compute fully in parallel.
Finally, I would like to note that this set has a cardinality of 3,088,896. That is the maximum theoretical degree of parallelism that this data set can achieve with my method; current network technologies likely cannot support such a degree.
mysql> select min(id), max(id) from words;
+---------+---------+
| min(id) | max(id) |
+---------+---------+
|       1 | 3088896 |
+---------+---------+
1 row in set (0.00 sec)
How to factor numbers
create table dataset (
id bigint auto_increment primary key,
prime_number bigint
);
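Populate it with the primes to test against; for illustration, the first few:

insert into dataset (prime_number) values (2),(3),(5),(7),(11),(13);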
Spread this table over as many machines as necessary using the method above. Assuming it holds the primes you want to use as trial divisors, and you want 1024-way parallelism:
select min(result) <> 0 as is_prime
  from (
        select mod({$NUMBER_TO_FACTOR}, prime_number) as result
          from dataset
         where id between 1 and 1024
       ) parallel_work;
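Note the logic: a zero remainder means a factor was found, so is_prime requires min(result) to be non-zero, and this assumes the number under test does not itself appear in the prime list, since mod(n, n) = 0. Just as with the buckets earlier, the BETWEEN predicate can be rewritten into discrete single-id probes, each an independent unit of work. A sketch of that expansion (first two of 1024 probes shown):

select mod({$NUMBER_TO_FACTOR}, prime_number) as result from dataset where id = 1;
select mod({$NUMBER_TO_FACTOR}, prime_number) as result from dataset where id = 2;
-- ... one probe per id, up to id = 1024; min() is applied when recombining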
This allows you to use data flow semantics on any cluster with respect to almost any generic computational function.