public interface AmazonMachineLearning
Definition of the public APIs exposed by Amazon Machine Learning
| Modifier and Type | Field and Description |
|---|---|
| static String | ENDPOINT_PREFIX The region metadata service name for computing region endpoints. |
| Modifier and Type | Method and Description |
|---|---|
| AddTagsResult | addTags(AddTagsRequest addTagsRequest) Adds one or more tags to an object, up to a limit of 10. |
| CreateBatchPredictionResult | createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest) Generates predictions for a group of observations. |
| CreateDataSourceFromRDSResult | createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest) Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). |
| CreateDataSourceFromRedshiftResult | createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest) Creates a DataSource from a database hosted on an Amazon Redshift cluster. |
| CreateDataSourceFromS3Result | createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request) Creates a DataSource object. |
| CreateEvaluationResult | createEvaluation(CreateEvaluationRequest createEvaluationRequest) Creates a new Evaluation of an MLModel. |
| CreateMLModelResult | createMLModel(CreateMLModelRequest createMLModelRequest) Creates a new MLModel using the DataSource and the recipe as information sources. |
| CreateRealtimeEndpointResult | createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest) Creates a real-time endpoint for the MLModel. |
| DeleteBatchPredictionResult | deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest) Assigns the DELETED status to a BatchPrediction, rendering it unusable. |
| DeleteDataSourceResult | deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest) Assigns the DELETED status to a DataSource, rendering it unusable. |
| DeleteEvaluationResult | deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest) Assigns the DELETED status to an Evaluation, rendering it unusable. |
| DeleteMLModelResult | deleteMLModel(DeleteMLModelRequest deleteMLModelRequest) Assigns the DELETED status to an MLModel, rendering it unusable. |
| DeleteRealtimeEndpointResult | deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest) Deletes a real-time endpoint of an MLModel. |
| DeleteTagsResult | deleteTags(DeleteTagsRequest deleteTagsRequest) Deletes the specified tags associated with an ML object. |
| DescribeBatchPredictionsResult | describeBatchPredictions() Simplified method form for invoking the DescribeBatchPredictions operation. |
| DescribeBatchPredictionsResult | describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest) Returns a list of BatchPrediction operations that match the search criteria in the request. |
| DescribeDataSourcesResult | describeDataSources() Simplified method form for invoking the DescribeDataSources operation. |
| DescribeDataSourcesResult | describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest) Returns a list of DataSource that match the search criteria in the request. |
| DescribeEvaluationsResult | describeEvaluations() Simplified method form for invoking the DescribeEvaluations operation. |
| DescribeEvaluationsResult | describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest) Returns a list of DescribeEvaluations that match the search criteria in the request. |
| DescribeMLModelsResult | describeMLModels() Simplified method form for invoking the DescribeMLModels operation. |
| DescribeMLModelsResult | describeMLModels(DescribeMLModelsRequest describeMLModelsRequest) Returns a list of MLModel that match the search criteria in the request. |
| DescribeTagsResult | describeTags(DescribeTagsRequest describeTagsRequest) Describes one or more of the tags for your Amazon ML object. |
| GetBatchPredictionResult | getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest) Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request. |
| ResponseMetadata | getCachedResponseMetadata(AmazonWebServiceRequest request) Returns additional metadata for a previously executed successful request, typically used for debugging issues where a service isn't acting as expected. |
| GetDataSourceResult | getDataSource(GetDataSourceRequest getDataSourceRequest) Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource. |
| GetEvaluationResult | getEvaluation(GetEvaluationRequest getEvaluationRequest) Returns an Evaluation that includes metadata as well as the current status of the Evaluation. |
| GetMLModelResult | getMLModel(GetMLModelRequest getMLModelRequest) Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel. |
| PredictResult | predict(PredictRequest predictRequest) Generates a prediction for the observation using the specified MLModel. |
| void | setEndpoint(String endpoint) Overrides the default endpoint for this client ("https://machinelearning.us-east-1.amazonaws.com"). |
| void | setRegion(Region region) An alternative to setEndpoint(String), sets the regional endpoint for this client's service calls. |
| void | shutdown() Shuts down this client object, releasing any resources that might be held open. |
| UpdateBatchPredictionResult | updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest) Updates the BatchPredictionName of a BatchPrediction. |
| UpdateDataSourceResult | updateDataSource(UpdateDataSourceRequest updateDataSourceRequest) Updates the DataSourceName of a DataSource. |
| UpdateEvaluationResult | updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest) Updates the EvaluationName of an Evaluation. |
| UpdateMLModelResult | updateMLModel(UpdateMLModelRequest updateMLModelRequest) Updates the MLModelName and the ScoreThreshold of an MLModel. |
static final String ENDPOINT_PREFIX
The region metadata service name for computing region endpoints.
void setEndpoint(String endpoint)
 Overrides the default endpoint for this client ("https://machinelearning.us-east-1.amazonaws.com"). Callers can pass in just the endpoint (ex:
 "machinelearning.us-east-1.amazonaws.com") or a full URL, including the
 protocol (ex: "https://machinelearning.us-east-1.amazonaws.com"). If the
 protocol is not specified here, the default protocol from this client's
 ClientConfiguration will be used, which by default is HTTPS.
 
For more information on using AWS regions with the AWS SDK for Java, and a complete list of all available endpoints for all AWS services, see: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3912
This method is not threadsafe. An endpoint should be configured when the client is created and before any service requests are made. Changing it afterwards creates inevitable race conditions for any service requests in transit or retrying.
Parameters:
endpoint - The endpoint (ex: "machinelearning.us-east-1.amazonaws.com") or a full URL, including the protocol (ex: "https://machinelearning.us-east-1.amazonaws.com"), of the region-specific AWS endpoint this client will communicate with.

void setRegion(Region region)
 An alternative to setEndpoint(String), sets the regional endpoint for this client's service calls. Callers can use this method to control which AWS region they want to work with.
 
 By default, all service endpoints in all regions use the https protocol.
 To use http instead, specify it in the ClientConfiguration
 supplied at construction.
 
This method is not threadsafe. A region should be configured when the client is created and before any service requests are made. Changing it afterwards creates inevitable race conditions for any service requests in transit or retrying.
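A minimal sketch of configuring the region on a concrete client before making any calls; the choice of Regions.US_EAST_1 and the default credential chain are illustrative assumptions, not requirements:

```java
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;

public class RegionSetupExample {
    public static void main(String[] args) {
        // Construct the client with the default credential provider chain.
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Configure the region once, before any service requests are made.
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
    }
}
```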
Parameters:
region - The region this client will communicate with. See Region.getRegion(com.amazonaws.regions.Regions) for accessing a given region. Must not be null and must be a region where the service is available.
See Also:
Region.getRegion(com.amazonaws.regions.Regions), Region.createClient(Class, com.amazonaws.auth.AWSCredentialsProvider, ClientConfiguration), Region.isServiceSupported(String)

AddTagsResult addTags(AddTagsRequest addTagsRequest)
 Adds one or more tags to an object, up to a limit of 10. Each tag
 consists of a key and an optional value. If you add a tag using a key
 that is already associated with the ML object, AddTags
 updates the tag's value.
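A short sketch of tagging an existing MLModel; the resource ID and tag values are hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.AddTagsRequest;
import com.amazonaws.services.machinelearning.model.Tag;

public class AddTagsExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Attach key/value tags (up to 10) to an existing ML object.
        AddTagsRequest request = new AddTagsRequest()
                .withResourceId("ml-exampleModelId")   // hypothetical MLModel ID
                .withResourceType("MLModel")           // type of the tagged object
                .withTags(new Tag().withKey("project").withValue("churn"),
                          new Tag().withKey("owner").withValue("data-team"));

        client.addTags(request);
    }
}
```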
 
Parameters:
addTagsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InvalidTagException
TagLimitExceededException
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

CreateBatchPredictionResult createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest)
 Generates predictions for a group of observations. The observations to
 process exist in one or more data files referenced by a
 DataSource. This operation creates a new
 BatchPrediction, and uses an MLModel and the
 data files referenced by the DataSource as information
 sources.
 
 CreateBatchPrediction is an asynchronous operation. In
 response to CreateBatchPrediction, Amazon Machine Learning
 (Amazon ML) immediately returns and sets the BatchPrediction
 status to PENDING. After the BatchPrediction
 completes, Amazon ML sets the status to COMPLETED.
 
 You can poll for status updates by using the GetBatchPrediction
 operation and checking the Status parameter of the result.
 After the COMPLETED status appears, the results are
 available in the location specified by the OutputUri
 parameter.
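A sketch of creating a batch prediction and polling GetBatchPrediction until it leaves the in-progress states; all IDs and the S3 output location are hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateBatchPredictionRequest;
import com.amazonaws.services.machinelearning.model.GetBatchPredictionRequest;

public class BatchPredictionExample {
    public static void main(String[] args) throws InterruptedException {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Start the asynchronous batch prediction job.
        client.createBatchPrediction(new CreateBatchPredictionRequest()
                .withBatchPredictionId("bp-example")                  // hypothetical IDs
                .withBatchPredictionName("Example batch prediction")
                .withMLModelId("ml-exampleModelId")
                .withBatchPredictionDataSourceId("ds-exampleBatchData")
                .withOutputUri("s3://example-bucket/batch-output/"));

        // Poll the Status parameter until the job completes or fails.
        String status;
        do {
            Thread.sleep(30_000);
            status = client.getBatchPrediction(
                    new GetBatchPredictionRequest().withBatchPredictionId("bp-example"))
                    .getStatus();
        } while ("PENDING".equals(status) || "INPROGRESS".equals(status));

        System.out.println("BatchPrediction finished with status " + status);
    }
}
```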
 
Parameters:
createBatchPredictionRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateDataSourceFromRDSResult createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest)
 Creates a DataSource object from an  Amazon Relational Database Service
 (Amazon RDS). A DataSource references data that can be used
 to perform CreateMLModel, CreateEvaluation, or
 CreateBatchPrediction operations.
 
 CreateDataSourceFromRDS is an asynchronous operation. In
 response to CreateDataSourceFromRDS, Amazon Machine Learning
 (Amazon ML) immediately returns and sets the DataSource
 status to PENDING. After the DataSource is
 created and ready for use, Amazon ML sets the Status
 parameter to COMPLETED. DataSource in the
 COMPLETED or PENDING state can be used only to
 perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
 
 If Amazon ML cannot accept the input source, it sets the
 Status parameter to FAILED and includes an
 error message in the Message attribute of the
 GetDataSource operation response.
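A compressed sketch of building the RDS data specification and starting the DataSource creation. Every identifier, credential, query, role ARN, and S3 path below is a placeholder, several RDSDataSpec fields (schema, resource/service roles, subnet, security groups) are omitted for brevity, and the field names simply follow the CreateDataSourceFromRDS request shape:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateDataSourceFromRDSRequest;
import com.amazonaws.services.machinelearning.model.RDSDataSpec;
import com.amazonaws.services.machinelearning.model.RDSDatabase;
import com.amazonaws.services.machinelearning.model.RDSDatabaseCredentials;

public class RdsDataSourceExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Describe where the observations live and where Amazon ML may stage them.
        RDSDataSpec spec = new RDSDataSpec()
                .withDatabaseInformation(new RDSDatabase()
                        .withInstanceIdentifier("example-rds-instance")
                        .withDatabaseName("exampledb"))
                .withDatabaseCredentials(new RDSDatabaseCredentials()
                        .withUsername("example_user")
                        .withPassword("example_password"))
                .withSelectSqlQuery("SELECT * FROM observations")
                .withS3StagingLocation("s3://example-bucket/staging/");

        // Kick off the asynchronous DataSource creation; status starts as PENDING.
        client.createDataSourceFromRDS(new CreateDataSourceFromRDSRequest()
                .withDataSourceId("ds-exampleRds")
                .withDataSourceName("Example RDS data source")
                .withRDSData(spec)
                .withRoleARN("arn:aws:iam::123456789012:role/example-ml-role")
                .withComputeStatistics(true));
    }
}
```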
 
Parameters:
createDataSourceFromRDSRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateDataSourceFromRedshiftResult createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest)
 Creates a DataSource from a database hosted on an Amazon
 Redshift cluster. A DataSource references data that can be
 used to perform either CreateMLModel,
 CreateEvaluation, or CreateBatchPrediction
 operations.
 
 CreateDataSourceFromRedshift is an asynchronous operation.
 In response to CreateDataSourceFromRedshift, Amazon Machine
 Learning (Amazon ML) immediately returns and sets the
 DataSource status to PENDING. After the
 DataSource is created and ready for use, Amazon ML sets the
 Status parameter to COMPLETED.
 DataSource in COMPLETED or PENDING
 states can be used to perform only CreateMLModel,
 CreateEvaluation, or CreateBatchPrediction
 operations.
 
 If Amazon ML can't accept the input source, it sets the
 Status parameter to FAILED and includes an
 error message in the Message attribute of the
 GetDataSource operation response.
 
 The observations should be contained in the database hosted on an Amazon
 Redshift cluster and should be specified by a SelectSqlQuery
 query. Amazon ML executes an Unload command in Amazon
 Redshift to transfer the result set of the SelectSqlQuery
 query to S3StagingLocation.
 
 After the DataSource has been created, it's ready for use in
 evaluations and batch predictions. If you plan to use the
 DataSource to train an MLModel, the
 DataSource also requires a recipe. A recipe describes how
 each input variable will be used in training an MLModel.
 Will the variable be included or excluded from training? Will the
 variable be manipulated; for example, will it be combined with another
 variable or will it be split apart into word combinations? The recipe
 provides answers to these questions.
 
 You can't change an existing datasource, but you can copy and modify the
 settings from an existing Amazon Redshift datasource to create a new
 datasource. To do so, call GetDataSource for an existing
 datasource and copy the values to a CreateDataSource call.
 Change the settings that you want to change and make sure that all
 required fields have the appropriate values.
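A sketch of polling a newly requested Redshift-backed DataSource and surfacing the Message attribute when creation fails; the DataSource ID is hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.GetDataSourceRequest;
import com.amazonaws.services.machinelearning.model.GetDataSourceResult;

public class DataSourceStatusExample {
    public static void main(String[] args) throws InterruptedException {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Wait for the asynchronous creation to finish one way or the other.
        GetDataSourceResult result;
        do {
            Thread.sleep(30_000);
            result = client.getDataSource(
                    new GetDataSourceRequest().withDataSourceId("ds-exampleRedshift"));
        } while ("PENDING".equals(result.getStatus()) || "INPROGRESS".equals(result.getStatus()));

        if ("FAILED".equals(result.getStatus())) {
            // The Message attribute explains why Amazon ML could not accept the input source.
            System.err.println("DataSource failed: " + result.getMessage());
        } else {
            System.out.println("DataSource status: " + result.getStatus());
        }
    }
}
```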
 
Parameters:
createDataSourceFromRedshiftRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateDataSourceFromS3Result createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request)
 Creates a DataSource object. A DataSource
 references data that can be used to perform CreateMLModel,
 CreateEvaluation, or CreateBatchPrediction
 operations.
 
 CreateDataSourceFromS3 is an asynchronous operation. In
 response to CreateDataSourceFromS3, Amazon Machine Learning
 (Amazon ML) immediately returns and sets the DataSource
 status to PENDING. After the DataSource has
 been created and is ready for use, Amazon ML sets the Status
 parameter to COMPLETED. DataSource in the
 COMPLETED or PENDING state can be used to
 perform only CreateMLModel, CreateEvaluation or
 CreateBatchPrediction operations.
 
 If Amazon ML can't accept the input source, it sets the
 Status parameter to FAILED and includes an
 error message in the Message attribute of the
 GetDataSource operation response.
 
 The observation data used in a DataSource should be ready to
 use; that is, it should have a consistent structure, and missing data
 values should be kept to a minimum. The observation data must reside in
 one or more .csv files in an Amazon Simple Storage Service (Amazon S3)
 location, along with a schema that describes the data items by name and
 type. The same schema must be used for all of the data files referenced
 by the DataSource.
 
 After the DataSource has been created, it's ready to use in
 evaluations and batch predictions. If you plan to use the
 DataSource to train an MLModel, the
 DataSource also needs a recipe. A recipe describes how each
 input variable will be used in training an MLModel. Will the
 variable be included or excluded from training? Will the variable be
 manipulated; for example, will it be combined with another variable or
 will it be split apart into word combinations? The recipe provides
 answers to these questions.
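A sketch of creating a DataSource from .csv files in Amazon S3; the bucket, schema location, and IDs are placeholders:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateDataSourceFromS3Request;
import com.amazonaws.services.machinelearning.model.S3DataSpec;

public class S3DataSourceExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Point at the .csv observations and the schema that describes them.
        S3DataSpec spec = new S3DataSpec()
                .withDataLocationS3("s3://example-bucket/training/")
                .withDataSchemaLocationS3("s3://example-bucket/training/schema.json");

        // ComputeStatistics must be true if this DataSource will later train an MLModel.
        client.createDataSourceFromS3(new CreateDataSourceFromS3Request()
                .withDataSourceId("ds-exampleS3")
                .withDataSourceName("Example S3 data source")
                .withDataSpec(spec)
                .withComputeStatistics(true));
    }
}
```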
 
Parameters:
createDataSourceFromS3Request -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateEvaluationResult createEvaluation(CreateEvaluationRequest createEvaluationRequest)
 Creates a new Evaluation of an MLModel. An
 MLModel is evaluated on a set of observations associated to
 a DataSource. Like a DataSource for an
 MLModel, the DataSource for an
 Evaluation contains values for the
 Target Variable. The Evaluation compares the
 predicted result for each observation to the actual outcome and provides
 a summary so that you know how effective the MLModel
 functions on the test data. Evaluation generates a relevant performance
 metric, such as BinaryAUC, RegressionRMSE or MulticlassAvgFScore based on
 the corresponding MLModelType: BINARY,
 REGRESSION or MULTICLASS.
 
 CreateEvaluation is an asynchronous operation. In response
 to CreateEvaluation, Amazon Machine Learning (Amazon ML)
 immediately returns and sets the evaluation status to
 PENDING. After the Evaluation is created and
 ready for use, Amazon ML sets the status to COMPLETED.
 
 You can use the GetEvaluation operation to check progress of
 the evaluation during the creation operation.
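A sketch of evaluating a trained MLModel against a held-out DataSource and checking progress with GetEvaluation; all IDs are hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateEvaluationRequest;
import com.amazonaws.services.machinelearning.model.GetEvaluationRequest;

public class EvaluationExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // The evaluation DataSource must contain values for the target variable.
        client.createEvaluation(new CreateEvaluationRequest()
                .withEvaluationId("ev-example")
                .withEvaluationName("Example evaluation")
                .withMLModelId("ml-exampleModelId")
                .withEvaluationDataSourceId("ds-exampleHoldout"));

        // Check progress of the asynchronous operation.
        String status = client.getEvaluation(
                new GetEvaluationRequest().withEvaluationId("ev-example")).getStatus();
        System.out.println("Evaluation status: " + status);
    }
}
```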
 
Parameters:
createEvaluationRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateMLModelResult createMLModel(CreateMLModelRequest createMLModelRequest)
 Creates a new MLModel using the DataSource and
 the recipe as information sources.
 
 An MLModel is nearly immutable. Users can update only the
 MLModelName and the ScoreThreshold in an
 MLModel without creating a new MLModel.
 
 CreateMLModel is an asynchronous operation. In response to
 CreateMLModel, Amazon Machine Learning (Amazon ML)
 immediately returns and sets the MLModel status to
 PENDING. After the MLModel has been created and is ready for use, Amazon ML sets the status to COMPLETED.
 
 You can use the GetMLModel operation to check the progress
 of the MLModel during the creation operation.
 
 CreateMLModel requires a DataSource with
 computed statistics, which can be created by setting
 ComputeStatistics to true in
 CreateDataSourceFromRDS,
 CreateDataSourceFromS3, or
 CreateDataSourceFromRedshift operations.
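A sketch of training a binary model from a DataSource that was created with ComputeStatistics set to true; the IDs and model type are illustrative:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateMLModelRequest;
import com.amazonaws.services.machinelearning.model.GetMLModelRequest;

public class CreateMLModelExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // The training DataSource must have been created with ComputeStatistics = true.
        client.createMLModel(new CreateMLModelRequest()
                .withMLModelId("ml-exampleModelId")
                .withMLModelName("Example binary model")
                .withMLModelType("BINARY")                  // BINARY, REGRESSION, or MULTICLASS
                .withTrainingDataSourceId("ds-exampleTraining"));

        // Check progress of the asynchronous training operation.
        String status = client.getMLModel(
                new GetMLModelRequest().withMLModelId("ml-exampleModelId")).getStatus();
        System.out.println("MLModel status: " + status);
    }
}
```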
 
Parameters:
createMLModelRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

CreateRealtimeEndpointResult createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest)
 Creates a real-time endpoint for the MLModel. The endpoint
 contains the URI of the MLModel; that is, the location to
 send real-time prediction requests for the specified MLModel.
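A sketch of creating the real-time endpoint and reading back its URI and status from the result; the model ID is hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateRealtimeEndpointRequest;
import com.amazonaws.services.machinelearning.model.CreateRealtimeEndpointResult;

public class RealtimeEndpointExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        CreateRealtimeEndpointResult result = client.createRealtimeEndpoint(
                new CreateRealtimeEndpointRequest().withMLModelId("ml-exampleModelId"));

        // The result carries the URI to which real-time Predict requests are sent.
        System.out.println("Endpoint URL:    " + result.getRealtimeEndpointInfo().getEndpointUrl());
        System.out.println("Endpoint status: " + result.getRealtimeEndpointInfo().getEndpointStatus());
    }
}
```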
 
Parameters:
createRealtimeEndpointRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteBatchPredictionResult deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest)
 Assigns the DELETED status to a BatchPrediction, rendering
 it unusable.
 
 After using the DeleteBatchPrediction operation, you can use
 the GetBatchPrediction operation to verify that the status of the
 BatchPrediction changed to DELETED.
 
 Caution: The result of the DeleteBatchPrediction
 operation is irreversible.
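A sketch of deleting a BatchPrediction and then verifying with GetBatchPrediction that its status changed to DELETED; the ID is hypothetical:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.DeleteBatchPredictionRequest;
import com.amazonaws.services.machinelearning.model.GetBatchPredictionRequest;

public class DeleteBatchPredictionExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Irreversible: the BatchPrediction becomes unusable once marked DELETED.
        client.deleteBatchPrediction(
                new DeleteBatchPredictionRequest().withBatchPredictionId("bp-example"));

        // Verify that the status has changed to DELETED.
        String status = client.getBatchPrediction(
                new GetBatchPredictionRequest().withBatchPredictionId("bp-example")).getStatus();
        System.out.println("Status after delete: " + status);
    }
}
```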
 
Parameters:
deleteBatchPredictionRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteDataSourceResult deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest)
 Assigns the DELETED status to a DataSource, rendering it
 unusable.
 
 After using the DeleteDataSource operation, you can use the
 GetDataSource operation to verify that the status of the
 DataSource changed to DELETED.
 
 Caution: The results of the DeleteDataSource
 operation are irreversible.
 
Parameters:
deleteDataSourceRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteEvaluationResult deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest)
 Assigns the DELETED status to an Evaluation,
 rendering it unusable.
 
 After invoking the DeleteEvaluation operation, you can use
 the GetEvaluation operation to verify that the status of the
 Evaluation changed to DELETED.
 
 The results of the DeleteEvaluation operation are
 irreversible.
 
Parameters:
deleteEvaluationRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteMLModelResult deleteMLModel(DeleteMLModelRequest deleteMLModelRequest)
 Assigns the DELETED status to an MLModel,
 rendering it unusable.
 
 After using the DeleteMLModel operation, you can use the
 GetMLModel operation to verify that the status of the
 MLModel changed to DELETED.
 
 Caution: The result of the DeleteMLModel operation is
 irreversible.
 
Parameters:
deleteMLModelRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteRealtimeEndpointResult deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest)
 Deletes a real-time endpoint of an MLModel.
 
Parameters:
deleteRealtimeEndpointRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DeleteTagsResult deleteTags(DeleteTagsRequest deleteTagsRequest)
Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.
If you specify a tag that doesn't exist, Amazon ML ignores it.
Parameters:
deleteTagsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InvalidTagException
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

DescribeBatchPredictionsResult describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest)
 Returns a list of BatchPrediction operations that match the
 search criteria in the request.
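A sketch of listing completed BatchPrediction operations with a filter; the filter variable, value, and page size are illustrative:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.BatchPrediction;
import com.amazonaws.services.machinelearning.model.DescribeBatchPredictionsRequest;
import com.amazonaws.services.machinelearning.model.DescribeBatchPredictionsResult;

public class DescribeBatchPredictionsExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // List up to 25 batch predictions whose Status equals COMPLETED.
        DescribeBatchPredictionsResult result = client.describeBatchPredictions(
                new DescribeBatchPredictionsRequest()
                        .withFilterVariable("Status")
                        .withEQ("COMPLETED")
                        .withLimit(25));

        for (BatchPrediction bp : result.getResults()) {
            System.out.println(bp.getBatchPredictionId() + " : " + bp.getName());
        }
        // result.getNextToken() can be passed back in to page through further results.
    }
}
```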
 
Parameters:
describeBatchPredictionsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

DescribeBatchPredictionsResult describeBatchPredictions()
Simplified method form for invoking the DescribeBatchPredictions operation.
DescribeDataSourcesResult describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest)
 Returns a list of DataSource that match the search criteria
 in the request.
 
Parameters:
describeDataSourcesRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

DescribeDataSourcesResult describeDataSources()
Simplified method form for invoking the DescribeDataSources operation.
DescribeEvaluationsResult describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest)
 Returns a list of DescribeEvaluations that match the search
 criteria in the request.
 
Parameters:
describeEvaluationsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

DescribeEvaluationsResult describeEvaluations()
Simplified method form for invoking the DescribeEvaluations operation.
DescribeMLModelsResult describeMLModels(DescribeMLModelsRequest describeMLModelsRequest)
 Returns a list of MLModel that match the search criteria in
 the request.
 
Parameters:
describeMLModelsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

DescribeMLModelsResult describeMLModels()
Simplified method form for invoking the DescribeMLModels operation.
DescribeTagsResult describeTags(DescribeTagsRequest describeTagsRequest)
Describes one or more of the tags for your Amazon ML object.
Parameters:
describeTagsRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

GetBatchPredictionResult getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest)
 Returns a BatchPrediction that includes detailed metadata,
 status, and data file information for a Batch Prediction
 request.
 
Parameters:
getBatchPredictionRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

GetDataSourceResult getDataSource(GetDataSourceRequest getDataSourceRequest)
 Returns a DataSource that includes metadata and data file
 information, as well as the current status of the DataSource.
 
 GetDataSource provides results in normal or verbose format.
 The verbose format adds the schema description and the list of files
 pointed to by the DataSource to the normal format.
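A sketch of fetching a DataSource in verbose format so the schema description is included in the response; the ID is hypothetical, and the schema accessor name follows the DataSourceSchema field returned in verbose mode:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.GetDataSourceRequest;
import com.amazonaws.services.machinelearning.model.GetDataSourceResult;

public class GetDataSourceVerboseExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Verbose = true adds the schema description to the normal response.
        GetDataSourceResult result = client.getDataSource(new GetDataSourceRequest()
                .withDataSourceId("ds-exampleS3")
                .withVerbose(true));

        System.out.println("Name:   " + result.getName());
        System.out.println("Status: " + result.getStatus());
        System.out.println("Schema: " + result.getDataSourceSchema());
    }
}
```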
 
Parameters:
getDataSourceRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

GetEvaluationResult getEvaluation(GetEvaluationRequest getEvaluationRequest)
 Returns an Evaluation that includes metadata as well as the
 current status of the Evaluation.
 
Parameters:
getEvaluationRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

GetMLModelResult getMLModel(GetMLModelRequest getMLModelRequest)
 Returns an MLModel that includes detailed metadata, data
 source information, and the current status of the MLModel.
 
 GetMLModel provides results in normal or verbose format.
 
Parameters:
getMLModelRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

PredictResult predict(PredictRequest predictRequest)
 Generates a prediction for the observation using the specified
 ML Model.
 
Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
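A sketch of a real-time prediction call; the model ID, endpoint URL (obtained from CreateRealtimeEndpoint), and record values are hypothetical, and which prediction fields come back populated depends on the model type:

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.PredictRequest;
import com.amazonaws.services.machinelearning.model.PredictResult;

public class PredictExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // One observation, expressed as variable-name -> value strings.
        Map<String, String> record = new HashMap<String, String>();
        record.put("age", "42");
        record.put("plan", "premium");

        PredictResult result = client.predict(new PredictRequest()
                .withMLModelId("ml-exampleModelId")
                .withPredictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")
                .withRecord(record));

        // For a BINARY model the label and scores are populated; other fields may be null.
        System.out.println("Predicted label: " + result.getPrediction().getPredictedLabel());
        System.out.println("Scores:          " + result.getPrediction().getPredictedScores());
    }
}
```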
Parameters:
predictRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
LimitExceededException - The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.
InternalServerException - An error on the server occurred when trying to process a request.
PredictorNotMountedException - The exception is thrown when a predict request is made to an unmounted MLModel.

UpdateBatchPredictionResult updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest)
 Updates the BatchPredictionName of a
 BatchPrediction.
 
 You can use the GetBatchPrediction operation to view the
 contents of the updated data element.
 
Parameters:
updateBatchPredictionRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

UpdateDataSourceResult updateDataSource(UpdateDataSourceRequest updateDataSourceRequest)
 Updates the DataSourceName of a DataSource.
 
 You can use the GetDataSource operation to view the contents
 of the updated data element.
 
Parameters:
updateDataSourceRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

UpdateEvaluationResult updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest)
 Updates the EvaluationName of an Evaluation.
 
 You can use the GetEvaluation operation to view the contents
 of the updated data element.
 
Parameters:
updateEvaluationRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

UpdateMLModelResult updateMLModel(UpdateMLModelRequest updateMLModelRequest)
 Updates the MLModelName and the ScoreThreshold
 of an MLModel.
 
 You can use the GetMLModel operation to view the contents of
 the updated data element.
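A sketch of renaming a model and adjusting its score threshold, then reading the object back with GetMLModel; the ID, name, and threshold are illustrative:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.GetMLModelRequest;
import com.amazonaws.services.machinelearning.model.GetMLModelResult;
import com.amazonaws.services.machinelearning.model.UpdateMLModelRequest;

public class UpdateMLModelExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        // Only MLModelName and ScoreThreshold can be changed on an existing MLModel.
        client.updateMLModel(new UpdateMLModelRequest()
                .withMLModelId("ml-exampleModelId")
                .withMLModelName("Example binary model v2")
                .withScoreThreshold(0.45f));

        // View the contents of the updated data element.
        GetMLModelResult model = client.getMLModel(
                new GetMLModelRequest().withMLModelId("ml-exampleModelId"));
        System.out.println(model.getName() + " threshold=" + model.getScoreThreshold());
    }
}
```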
 
Parameters:
updateMLModelRequest -
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

void shutdown()
Shuts down this client object, releasing any resources that might be held open.
ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request)
Returns additional metadata for a previously executed successful request, typically used for debugging issues where a service isn't acting as expected. Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic information for an executed request, you should use this method to retrieve it as soon as possible after executing a request.
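A sketch of pulling the cached diagnostic metadata (such as the AWS request ID) immediately after a call; the request object shown is arbitrary:

```java
import com.amazonaws.ResponseMetadata;
import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.DescribeMLModelsRequest;

public class ResponseMetadataExample {
    public static void main(String[] args) {
        AmazonMachineLearning client = new AmazonMachineLearningClient();

        DescribeMLModelsRequest request = new DescribeMLModelsRequest();
        client.describeMLModels(request);

        // Retrieve the metadata promptly; it is only cached for a limited time.
        ResponseMetadata metadata = client.getCachedResponseMetadata(request);
        if (metadata != null) {
            System.out.println("AWS request ID: " + metadata.getRequestId());
        }
    }
}
```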
Parameters:
request - The originally executed request.