package io.snappydata.examples


Type Members

  1. case class Data(col1: Int, col2: Int, col3: Int) extends Product with Serializable

  2. class JavaAirlineDataJob extends AnyRef

  3. class JavaCreateAndLoadAirlineDataJob extends JavaSnappySQLJob

  4. class JavaTwitterPopularTagsJob extends JavaSnappyStreamingJob

  5. case class TwitterSchema(retweetCnt: Int, retweetTxt: String) extends Product with Serializable


Value Members

  1. object AirlineDataJob extends SnappySQLJob


    Fetches already created tables. The Airline table is already persisted in the Snappy store. The job also caches the airline table in the Spark cache for comparison, samples the airline table and persists the sample in the Snappy store, then runs an aggregate query on all three tables and returns the results in a Map. This Map is sent over REST.

    Run this on your local machine:

    $ sbin/snappy-start-all.sh

    Create the tables:

    $ ./bin/snappy-job.sh submit --lead localhost:8090 \
        --app-name CreateAndLoadAirlineDataJob \
        --class io.snappydata.examples.CreateAndLoadAirlineDataJob \
        --app-jar $SNAPPY_HOME/examples/jars/quickstart.jar

    $ ./bin/snappy-job.sh submit --lead localhost:8090 \
        --app-name AirlineDataJob \
        --class io.snappydata.examples.AirlineDataJob \
        --app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
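    The description above says the job runs the same aggregate on three table variants and returns the results in a Map sent over REST. A minimal plain-Scala sketch of that result shape follows; the closures are placeholders standing in for the real SQL aggregates, and all names are hypothetical:

```scala
// Hypothetical sketch: run one aggregate per table variant, time it, and
// collect everything into the Map a SnappySQLJob would return over REST.
def timedAggregate(table: String, run: () => Long): (String, String) = {
  val start = System.nanoTime()
  val result = run()                                 // placeholder for a SQL aggregate
  val millis = (System.nanoTime() - start) / 1000000
  table -> s"result=$result time=${millis}ms"
}

val results: Map[String, String] = Map(
  timedAggregate("airline_snappy_store", () => 42L), // table persisted in the Snappy store
  timedAggregate("airline_spark_cache", () => 42L),  // same table cached in Spark
  timedAggregate("airline_sample", () => 41L)        // sampled table, approximate answer
)
```

    The Map keys identify which table variant produced each result, which makes the three timings directly comparable on the REST response.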

  2. object AirlineDataSparkApp


    This application depicts how a Spark cluster can connect to a Snappy cluster to fetch and query the tables using Scala APIs in a Spark App.

    Run this on your local machine:

    Start the Snappy cluster:

    $ sbin/snappy-start-all.sh

    Start the Spark cluster:

    $ sbin/start-all.sh

    Create the tables:

    $ ./bin/snappy-job.sh submit --lead localhost:8090 \
        --app-name CreateAndLoadAirlineDataJob \
        --class io.snappydata.examples.CreateAndLoadAirlineDataJob \
        --app-jar $SNAPPY_HOME/examples/jars/quickstart.jar

    $ ./bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp \
        --master spark://<hostname>:7077 \
        --conf snappydata.connection=localhost:1527 \
        $SNAPPY_HOME/examples/jars/quickstart.jar
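    To make the spark-submit command above concrete, here is a minimal sketch of a smart-connector Spark app, assuming the SnappyData connector (and its `SnappySession` API) is on the classpath; the table name is illustrative and this is not the actual example source:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

object AirlineDataSparkAppSketch {
  def main(args: Array[String]): Unit = {
    // Plain Spark session; snappydata.connection is supplied by the
    // --conf shown in the spark-submit command above.
    val spark = SparkSession.builder()
      .appName("AirlineDataSparkApp")
      .getOrCreate()

    // Smart-connector entry point: a SnappySession layered on the
    // SparkContext gives access to tables stored in the Snappy cluster.
    val snappy = new SnappySession(spark.sparkContext)
    snappy.sql("SELECT count(*) FROM airline").show()

    spark.stop()
  }
}
```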

  3. object CreateAndLoadAirlineDataJob extends SnappySQLJob


    Creates and loads Airline data from parquet files into row and column tables. Also samples the data and stores the sample in a column table.

    Run this on your local machine:

    $ sbin/snappy-start-all.sh

    $ ./bin/snappy-job.sh submit --lead localhost:8090 \
        --app-name CreateAndLoadAirlineDataJob \
        --class io.snappydata.examples.CreateAndLoadAirlineDataJob \
        --app-jar $SNAPPY_HOME/examples/jars/quickstart.jar
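    A hedged sketch of the job shape this example implements, assuming the SnappyData job API (`SnappySQLJob` with `runSnappyJob`/`isValidJob`, and `SnappyJobValid`); the parquet path and table names are illustrative and this is not the actual example source:

```scala
import com.typesafe.config.Config
import org.apache.spark.sql.{SnappyJobValid, SnappyJobValidation, SnappySQLJob, SnappySession}

object CreateAndLoadAirlineDataSketch extends SnappySQLJob {

  // Entry point invoked on the lead node once the job is submitted.
  override def runSnappyJob(snappy: SnappySession, jobConfig: Config): Any = {
    // Read the parquet source (path is illustrative).
    val airlineDF = snappy.read.parquet("quickstart/data/airlineParquetData")

    // Persist it as a column table; a row table would use format("row").
    airlineDF.write.format("column").saveAsTable("AIRLINE")

    // The real example also creates a stratified sample of AIRLINE
    // and stores it in another column table.
    "tables created"
  }

  // Cheap validation hook run before the job is scheduled.
  override def isValidJob(snappy: SnappySession, config: Config): SnappyJobValidation =
    SnappyJobValid()
}
```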

  4. object DataUpdateJob extends SnappySQLJob


    An example that uses ConnectionUtil to mutate data stored in SnappyData.

  5. object SampleApp

  6. object StreamingUtils

  7. object TwitterPopularTagsJob extends SnappyStreamingJob


    Run this on your local machine:

    $ sbin/snappy-start-all.sh

    To run with live Twitter streaming, export the Twitter credentials:

    $ export APP_PROPS="consumerKey=<consumerKey>,consumerSecret=<consumerSecret>,\
    accessToken=<accessToken>,accessTokenSecret=<accessTokenSecret>"

    $ ./bin/snappy-job.sh submit --lead localhost:8090 \
        --app-name TwitterPopularTagsJob \
        --class io.snappydata.examples.TwitterPopularTagsJob \
        --app-jar $SNAPPY_HOME/examples/jars/quickstart.jar --stream

    To run with stored Twitter data, run simulateTwitterStream after the job is submitted:

    $ ./quickstart/scripts/simulateTwitterStream
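    The core of a popular-tags job is counting hashtags per batch and keeping the top N. A plain-Scala sketch of that logic (no Spark Streaming; function and sample data are hypothetical):

```scala
// Split each tweet's text into words, keep hashtags, count occurrences,
// and return the topN tags by count (descending).
def popularTags(tweets: Seq[String], topN: Int): Seq[(String, Int)] =
  tweets
    .flatMap(_.split("\\s+"))
    .filter(_.startsWith("#"))
    .groupBy(identity)
    .map { case (tag, occurrences) => tag -> occurrences.size }
    .toSeq
    .sortBy { case (_, count) => -count }
    .take(topN)

val sample = Seq(
  "#spark streaming demo",
  "#spark meets #snappydata",
  "popular tags with #snappydata"
)
val top = popularTags(sample, 2)
```

    In the streaming job the same computation runs over a sliding window of the tweet DStream instead of a fixed Seq.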
