
Commit

Version bump.
ghislainfourny committed Nov 18, 2020
1 parent 2461301 commit f08749f
Showing 6 changed files with 16 additions and 16 deletions.
4 changes: 2 additions & 2 deletions docs/Getting started.md
@@ -39,14 +39,14 @@ Create, in the same directory as Rumble to keep it simple, a file data.json and

In a shell, from the directory where the rumble .jar lies, type, all on one line:

-spark-submit spark-rumble-1.9.0.jar --shell yes
+spark-submit spark-rumble-1.9.1.jar --shell yes
The Rumble shell appears:

____ __ __
/ __ \__ ______ ___ / /_ / /__
/ /_/ / / / / __ `__ \/ __ \/ / _ \ The distributed JSONiq engine
-/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.0 "Scots pine" beta
+/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.1 "Scots pine" beta
/_/ |_|\__,_/_/ /_/ /_/_.___/_/\___/
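As a quick illustration (not part of this commit), data.json can hold one JSON object per line and be queried directly from the shell. The file contents and query below are hypothetical, and assume Rumble's json-file() function for reading JSON Lines input:

    data.json (one object per line):
    { "product": "shirt", "quantity": 2 }
    { "product": "socks", "quantity": 5 }

    query typed at the Rumble prompt (returns "socks"):
    for $item in json-file("data.json")
    where $item.quantity gt 3
    return $item.product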


8 changes: 4 additions & 4 deletions docs/HTTPServer.md
@@ -4,7 +4,7 @@

Rumble can be run as an HTTP server that listens for queries. In order to do so, you can use the --server and --port parameters:

-spark-submit spark-rumble-1.9.0.jar --server yes --port 8001
+spark-submit spark-rumble-1.9.1.jar --server yes --port 8001

This command does not return until you stop it (Ctrl+C on Linux and macOS), because the server must keep running to listen for incoming requests.
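Once the server is running, queries can be sent to it over HTTP. As a hedged sketch (not part of this commit), assuming the server accepts the JSONiq query text as the body of a POST request to the /jsoniq path, a quick test from another terminal could be:

    curl -X POST --data '1 + 1' http://localhost:8001/jsoniq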

@@ -92,19 +92,19 @@ Then there are two options
- Connect to the master over SSH with an extra parameter for securely tunneling the HTTP connection (for example `-L 8001:localhost:8001` or any port of your choosing; an example command is sketched after this list)
- Download the Rumble jar to the master node

-wget https://github.com/RumbleDB/rumble/releases/download/v1.9.0/spark-rumble-1.9.0.jar
+wget https://github.com/RumbleDB/rumble/releases/download/v1.9.1/spark-rumble-1.9.1.jar

- Launch the HTTP server on the master node (it will be accessible under `http://localhost:8001/jsoniq`).

-spark-submit spark-rumble-1.9.0.jar --server yes --port 8001
+spark-submit spark-rumble-1.9.1.jar --server yes --port 8001

- And then use Jupyter notebooks in the same way you would locally (it magically works because of the tunneling)
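As a sketch of the tunneling step above (not part of the original page; the key file and hostname are placeholders, and the hadoop user is merely typical of EMR), the SSH command could look like:

    ssh -i my-key.pem -L 8001:localhost:8001 hadoop@<master-public-dns>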

### With the EC2 hostname

There is another way that does not need any tunneling: you can specify the hostname of your EC2 machine (copied from the EC2 dashboard) with the --host parameter. For example, with the placeholder <ec2-hostname>:

-spark-submit spark-rumble-1.9.0.jar --server yes --port 8001 --host <ec2-hostname>
+spark-submit spark-rumble-1.9.1.jar --server yes --port 8001 --host <ec2-hostname>

You also need to make sure in your EMR security group that the chosen port (e.g., 8001) is accessible from the machine on which you run your Jupyter notebook. Then, you can point your Jupyter notebook on this machine to `http://<ec2-hostname>:8001/jsoniq`.

12 changes: 6 additions & 6 deletions docs/Run on a cluster.md
@@ -5,21 +5,21 @@ simply by modifying the command line parameters as documented [here for spark-su

If the Spark cluster is running on yarn, then the --master option can be changed from local[\*] (used in the getting started guide) to yarn. Most of the time, though (e.g., on Amazon EMR), it need not be specified, as this is already set up in the environment.

-spark-submit spark-rumble-1.9.0.jar --shell yes
+spark-submit spark-rumble-1.9.1.jar --shell yes
or explicitly:

-spark-submit --master yarn --deploy-mode client spark-rumble-1.9.0.jar --shell yes
+spark-submit --master yarn --deploy-mode client spark-rumble-1.9.1.jar --shell yes

You can also adapt the number of executors, etc.

spark-submit --num-executors 30 --executor-cores 3 --executor-memory 10g
-spark-rumble-1.9.0.jar --shell yes
+spark-rumble-1.9.1.jar --shell yes

The size limit for materialization can also be raised with --materialization-cap (the default is 200). This affects the number of items displayed in the shell as the answer to a query, as well as any materializations happening within the query when push-down is not supported. Warnings are issued if the cap is reached.

spark-submit --num-executors 30 --executor-cores 3 --executor-memory 10g
-spark-rumble-1.9.0.jar
+spark-rumble-1.9.1.jar
--shell yes --materialization-cap 10000

## Creation functions
@@ -59,15 +59,15 @@ Note that by default only the first 1000 items in the output will be displayed o
Rumble also supports executing a single query from the command line, reading from HDFS and outputting the results to HDFS, with the query file being either local or on HDFS. For this, use the --query-path, --output-path and --log-path parameters.

spark-submit --num-executors 30 --executor-cores 3 --executor-memory 10g
-spark-rumble-1.9.0.jar
+spark-rumble-1.9.1.jar
--query-path "hdfs:///user/me/query.jq"
--output-path "hdfs:///user/me/results/output"
--log-path "hdfs:///user/me/logging/mylog"

The query path, output path and log path can be any of the supported schemes (HDFS, file, S3, WASB...) and can be relative or absolute.

spark-submit --num-executors 30 --executor-cores 3 --executor-memory 10g
-spark-rumble-1.9.0.jar
+spark-rumble-1.9.1.jar
--query-path "/home/me/my-local-machine/query.jq"
--output-path "/user/me/results/output"
--log-path "hdfs:///user/me/logging/mylog"
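To make this concrete, query.jq could contain a small JSONiq program such as the following (a hypothetical query, assuming json-file() reads a JSON Lines dataset from HDFS); its results are then written under --output-path and its logs under --log-path:

    for $order in json-file("hdfs:///user/me/orders.json")
    where $order.status eq "shipped"
    return $order.id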
4 changes: 2 additions & 2 deletions docs/install.md
@@ -64,7 +64,7 @@ After successful completion, you can check the `target` directory, which should

The most straightforward way to test whether the above steps were successful is to run the Rumble shell locally, like so:

-$ spark-submit target/spark-rumble-1.9.0.jar --shell yes
+$ spark-submit target/spark-rumble-1.9.1.jar --shell yes

The Rumble shell should start:

@@ -73,7 +73,7 @@ The Rumble shell should start:
____ __ __
/ __ \__ ______ ___ / /_ / /__
/ /_/ / / / / __ `__ \/ __ \/ / _ \ The distributed JSONiq engine
-/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.0 "Scots pine" beta
+/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.1 "Scots pine" beta
/_/ |_|\__,_/_/ /_/ /_/_.___/_/\___/

Master: local[2]
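Any simple JSONiq expression then confirms that the build works; for example, the following query typed at the prompt should return 1, 4 and 9:

    for $i in 1 to 3 return $i * $i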
2 changes: 1 addition & 1 deletion pom.xml
@@ -26,7 +26,7 @@

<groupId>com.github.rumbledb</groupId>
<artifactId>spark-rumble</artifactId>
-<version>1.9.0</version>
+<version>1.9.1</version>
<packaging>jar</packaging>
<name>Rumble</name>
<description>A JSONiq engine to query large-scale JSON datasets stored on HDFS. Spark under the hood.</description>
2 changes: 1 addition & 1 deletion src/main/resources/assets/banner.txt
@@ -1,6 +1,6 @@
____ __ __
/ __ \__ ______ ___ / /_ / /__
/ /_/ / / / / __ `__ \/ __ \/ / _ \ The distributed JSONiq engine
-/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.0 "Ficus Bonsai" beta
+/ _, _/ /_/ / / / / / / /_/ / / __/ 1.9.1 "Ficus Bonsai" beta
/_/ |_|\__,_/_/ /_/ /_/_.___/_/\___/
