Fetches already created tables. This application shows how a Spark cluster can connect to a SnappyData cluster to fetch and query the tables using Scala APIs in a Spark app.
Run this on your local machine:
Start the SnappyData cluster:
$ sbin/snappy-start-all.sh
Start the Spark cluster:
$ sbin/start-all.sh
Create the tables:
$ ./bin/snappy-job.sh submit --lead localhost:8090 \
--app-name CreateAndLoadAirlineDataJob \
--class io.snappydata.examples.CreateAndLoadAirlineDataJob \
--app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
Run the Spark app:
$ ./bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp \
--master spark://<hostname>:7077 --conf snappydata.connection=localhost:1527 \
$SNAPPY_HOME/examples/jars/quickstart.jar
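An external Spark app talking to SnappyData typically builds a SparkSession with the snappydata.connection property (matching the --conf shown above) and queries the tables through a SnappySession. A minimal sketch, assuming the quickstart's AIRLINE table exists; the aggregate query itself is illustrative:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

object AirlineQuerySketch {
  def main(args: Array[String]): Unit = {
    // Connect this Spark app to the Snappy cluster's locator
    // (same host:port as --conf snappydata.connection=localhost:1527 above).
    val spark = SparkSession.builder
      .appName("AirlineQuerySketch")
      .master("spark://<hostname>:7077")
      .config("snappydata.connection", "localhost:1527")
      .getOrCreate()

    // A SnappySession wraps the SparkContext and exposes the Snappy catalog.
    val snappy = new SnappySession(spark.sparkContext)

    // Query a table created earlier by CreateAndLoadAirlineDataJob.
    snappy.sql("SELECT count(*) AS flights FROM AIRLINE").show()

    spark.stop()
  }
}
```

Running this requires both clusters started as described above, so it is a sketch of the pattern rather than a standalone program.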
Creates and loads Airline data from Parquet files into row and column tables. Also samples the data and stores the sample in a column table.
Run this on your local machine:
$ sbin/snappy-start-all.sh
$ ./bin/snappy-job.sh submit --lead localhost:8090 \
--app-name CreateAndLoadAirlineDataJob --class io.snappydata.examples.CreateAndLoadAirlineDataJob \
--app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
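A job submitted this way implements the SnappySQLJob API: the job server calls runSnappyJob with a SnappySession, and the job reads the Parquet data and loads it into a Snappy table. A rough sketch in the spirit of CreateAndLoadAirlineDataJob; the data path and table name are illustrative assumptions, not the quickstart's exact code:

```scala
import com.typesafe.config.Config
import org.apache.spark.sql._

// Sketch of a Snappy job that loads Parquet data into a column table.
// The path and table name are assumed for illustration.
object LoadAirlineSketch extends SnappySQLJob {

  override def runSnappyJob(snappy: SnappySession, jobConfig: Config): Any = {
    // Read the source Parquet files (hypothetical path).
    val parquetDF = snappy.read.parquet("quickstart/data/airlineParquetData")

    // Create a column table with the same schema and load the data into it.
    snappy.dropTable("AIRLINE", ifExists = true)
    snappy.createTable("AIRLINE", "column", parquetDF.schema,
      Map.empty[String, String])
    parquetDF.write.insertInto("AIRLINE")

    s"Loaded ${snappy.table("AIRLINE").count()} rows into AIRLINE"
  }

  override def isValidJob(snappy: SnappySession, config: Config): SnappyJobValidation =
    SnappyJobValid()
}
```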
An example that uses ConnectionUtil to mutate data stored in SnappyData.
Run this on your local machine:
$ sbin/snappy-start-all.sh
To run with live Twitter streaming, export your Twitter credentials:
$ export APP_PROPS="consumerKey=<consumerKey>,consumerSecret=<consumerSecret>,\
accessToken=<accessToken>,accessTokenSecret=<accessTokenSecret>"
$ ./bin/snappy-job.sh submit --lead localhost:8090 \
--app-name TwitterPopularTagsJob --class io.snappydata.examples.TwitterPopularTagsJob \
--app-jar $SNAPPY_HOME/examples/jars/quickstart.jar --stream
To run with stored Twitter data, run the simulateTwitterStream script after the job is submitted:
$ ./quickstart/scripts/simulateTwitterStream
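In outline, the job consumes either the live Twitter source or the simulated file stream and counts popular hashtags over a sliding window. A rough sketch of just the windowed-count portion using plain Spark Streaming; the input directory and window sizes are assumptions, and the actual job wires this into SnappyData's streaming job API instead:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch of the popular-hashtags computation only; source wiring
// (live Twitter vs. simulated stream) is omitted.
object PopularTagsSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("PopularTagsSketch"), Seconds(2))

    // Assume a text stream of tweets, e.g. files written by the
    // simulateTwitterStream script (hypothetical directory).
    val tweets = ssc.textFileStream("/tmp/copiedtwitterdata")

    // Extract hashtags and count them over a sliding window.
    val topTags = tweets.flatMap(_.split(" ")).filter(_.startsWith("#"))
      .map((_, 1))
      .reduceByKeyAndWindow(_ + _, Seconds(10))
      .map { case (tag, count) => (count, tag) }
      .transform(_.sortByKey(ascending = false))

    topTags.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```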
Fetches already created tables. The airline table is already persisted in the Snappy store. The job also caches the airline table in the Spark cache for comparison, samples the airline table and persists the sample in the Snappy store, then runs an aggregate query on all three tables and returns the results in a Map. This Map is sent back over REST.
Run this on your local machine:
$ sbin/snappy-start-all.sh
Create the tables:
$ ./bin/snappy-job.sh submit --lead localhost:8090 \
--app-name CreateAndLoadAirlineDataJob --class io.snappydata.examples.CreateAndLoadAirlineDataJob \
--app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
Run the job:
$ ./bin/snappy-job.sh submit --lead localhost:8090 \
--app-name AirlineDataJob --class io.snappydata.examples.AirlineDataJob \
--app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
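The flow described above (persisted table, Spark-cached copy, sample table, one aggregate over each, results in a Map) can be sketched roughly as follows; the query and table names are assumptions for illustration:

```scala
import com.typesafe.config.Config
import org.apache.spark.sql._

// Rough sketch of AirlineDataJob's flow: aggregate over the Snappy store
// table, a Spark-cached copy, and the sample table, returning a Map that
// the job server serializes and sends back over REST.
object AirlineAggregateSketch extends SnappySQLJob {

  override def runSnappyJob(snappy: SnappySession, jobConfig: Config): Any = {
    // 1. Aggregate on the persisted Snappy column table.
    val snappyCount =
      snappy.sql("SELECT count(*) FROM AIRLINE").collect()(0).getLong(0)

    // 2. Cache the same data in the Spark cache and aggregate again,
    //    for a store-vs-cache comparison.
    val sparkCount = snappy.table("AIRLINE").cache().count()

    // 3. Aggregate on the sample table (hypothetical name AIRLINE_SAMPLE).
    val sampleCount =
      snappy.sql("SELECT count(*) FROM AIRLINE_SAMPLE").collect()(0).getLong(0)

    // The returned Map is what the REST response carries.
    Map("snappyStore" -> snappyCount,
        "sparkCache"  -> sparkCount,
        "sample"      -> sampleCount)
  }

  override def isValidJob(snappy: SnappySession, config: Config): SnappyJobValidation =
    SnappyJobValid()
}
```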