@ThreadSafe public class AmazonMachineLearningClient extends AmazonWebServiceClient implements AmazonMachineLearning
Definition of the public APIs exposed by Amazon Machine Learning
| Constructor and Description |
|---|
| AmazonMachineLearningClient() - Constructs a new client to invoke service methods on Amazon Machine Learning. |
| AmazonMachineLearningClient(AWSCredentials awsCredentials) - Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials. |
| AmazonMachineLearningClient(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration) - Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials and client configuration options. |
| AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider) - Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider. |
| AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration) - Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider and client configuration options. |
| AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration, RequestMetricCollector requestMetricCollector) - Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider, client configuration options, and request metric collector. |
| AmazonMachineLearningClient(ClientConfiguration clientConfiguration) - Constructs a new client to invoke service methods on Amazon Machine Learning. |
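For orientation, here is a minimal sketch of constructing a client with the credentials-provider constructor documented below. The region, retry count, and use of DefaultAWSCredentialsProviderChain are illustrative choices, not requirements:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;

public class MachineLearningClientSetup {
    public static AmazonMachineLearningClient buildClient() {
        // Credentials are resolved by the default provider chain; tune ClientConfiguration as needed.
        ClientConfiguration config = new ClientConfiguration().withMaxErrorRetry(5);
        AmazonMachineLearningClient client =
                new AmazonMachineLearningClient(new DefaultAWSCredentialsProviderChain(), config);
        client.configureRegion(Regions.US_EAST_1); // configureRegion is inherited from AmazonWebServiceClient
        return client;
    }
}
```

The remaining snippets on this page assume a `client` built this way and that the model classes from com.amazonaws.services.machinelearning.model are imported.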
| Modifier and Type | Method and Description |
|---|---|
| AddTagsResult | addTags(AddTagsRequest addTagsRequest) Adds one or more tags to an object, up to a limit of 10. |
| CreateBatchPredictionResult | createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest) Generates predictions for a group of observations. |
| CreateDataSourceFromRDSResult | createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest) Creates a DataSource object from a database hosted on Amazon Relational Database Service (Amazon RDS). |
| CreateDataSourceFromRedshiftResult | createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest) Creates a DataSource from a database hosted on an Amazon Redshift cluster. |
| CreateDataSourceFromS3Result | createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request) Creates a DataSource object. |
| CreateEvaluationResult | createEvaluation(CreateEvaluationRequest createEvaluationRequest) Creates a new Evaluation of an MLModel. |
| CreateMLModelResult | createMLModel(CreateMLModelRequest createMLModelRequest) Creates a new MLModel using the DataSource and the recipe as information sources. |
| CreateRealtimeEndpointResult | createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest) Creates a real-time endpoint for the MLModel. |
| DeleteBatchPredictionResult | deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest) Assigns the DELETED status to a BatchPrediction, rendering it unusable. |
| DeleteDataSourceResult | deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest) Assigns the DELETED status to a DataSource, rendering it unusable. |
| DeleteEvaluationResult | deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest) Assigns the DELETED status to an Evaluation, rendering it unusable. |
| DeleteMLModelResult | deleteMLModel(DeleteMLModelRequest deleteMLModelRequest) Assigns the DELETED status to an MLModel, rendering it unusable. |
| DeleteRealtimeEndpointResult | deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest) Deletes a real-time endpoint of an MLModel. |
| DeleteTagsResult | deleteTags(DeleteTagsRequest deleteTagsRequest) Deletes the specified tags associated with an ML object. |
| DescribeBatchPredictionsResult | describeBatchPredictions() Simplified method form for invoking the DescribeBatchPredictions operation. |
| DescribeBatchPredictionsResult | describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest) Returns a list of BatchPrediction operations that match the search criteria in the request. |
| DescribeDataSourcesResult | describeDataSources() Simplified method form for invoking the DescribeDataSources operation. |
| DescribeDataSourcesResult | describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest) Returns a list of DataSource objects that match the search criteria in the request. |
| DescribeEvaluationsResult | describeEvaluations() Simplified method form for invoking the DescribeEvaluations operation. |
| DescribeEvaluationsResult | describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest) Returns a list of Evaluation objects that match the search criteria in the request. |
| DescribeMLModelsResult | describeMLModels() Simplified method form for invoking the DescribeMLModels operation. |
| DescribeMLModelsResult | describeMLModels(DescribeMLModelsRequest describeMLModelsRequest) Returns a list of MLModel objects that match the search criteria in the request. |
| DescribeTagsResult | describeTags(DescribeTagsRequest describeTagsRequest) Describes one or more of the tags for your Amazon ML object. |
| GetBatchPredictionResult | getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest) Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request. |
| ResponseMetadata | getCachedResponseMetadata(AmazonWebServiceRequest request) Returns additional metadata for a previously executed successful request, typically used for debugging issues where a service isn't acting as expected. |
| GetDataSourceResult | getDataSource(GetDataSourceRequest getDataSourceRequest) Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource. |
| GetEvaluationResult | getEvaluation(GetEvaluationRequest getEvaluationRequest) Returns an Evaluation that includes metadata as well as the current status of the Evaluation. |
| GetMLModelResult | getMLModel(GetMLModelRequest getMLModelRequest) Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel. |
| PredictResult | predict(PredictRequest predictRequest) Generates a prediction for the observation using the specified MLModel. |
| UpdateBatchPredictionResult | updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest) Updates the BatchPredictionName of a BatchPrediction. |
| UpdateDataSourceResult | updateDataSource(UpdateDataSourceRequest updateDataSourceRequest) Updates the DataSourceName of a DataSource. |
| UpdateEvaluationResult | updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest) Updates the EvaluationName of an Evaluation. |
| UpdateMLModelResult | updateMLModel(UpdateMLModelRequest updateMLModelRequest) Updates the MLModelName and the ScoreThreshold of an MLModel. |
| AmazonMachineLearningWaiters | waiters() |
Methods inherited from class com.amazonaws.AmazonWebServiceClient: addRequestHandler, addRequestHandler, configureRegion, getEndpointPrefix, getRequestMetricsCollector, getServiceName, getSignerByURI, getSignerRegionOverride, getTimeOffset, makeImmutable, removeRequestHandler, removeRequestHandler, setEndpoint, setRegion, setServiceNameIntern, setSignerRegionOverride, setTimeOffset, shutdown, withEndpoint, withRegion, withRegion, withTimeOffset

Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface AmazonMachineLearning: setEndpoint, setRegion, shutdown

public AmazonMachineLearningClient()
Constructs a new client to invoke service methods on Amazon Machine Learning. All service calls made using this new client object are blocking, and will not return until the service call completes.
See Also: DefaultAWSCredentialsProviderChain

public AmazonMachineLearningClient(ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).
See Also: DefaultAWSCredentialsProviderChain

public AmazonMachineLearningClient(AWSCredentials awsCredentials)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.

public AmazonMachineLearningClient(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials and client configuration options. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider and client configuration options. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration, RequestMetricCollector requestMetricCollector)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider, client configuration options, and request metric collector. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).
requestMetricCollector - optional request metric collector

public AddTagsResult addTags(AddTagsRequest addTagsRequest)
 Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you
 add a tag using a key that is already associated with the ML object, AddTags updates the tag's
 value.
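A sketch of tagging a DataSource, assuming the fluent with* setters the SDK generates for AddTagsRequest and Tag; the resource ID, resource type string, and tag values are placeholders:

```java
// Tag an existing DataSource (IDs and tag values are hypothetical).
AddTagsResult result = client.addTags(new AddTagsRequest()
        .withResourceId("ds-exampleDataSourceId")
        .withResourceType("DataSource")               // one of the taggable ML object types
        .withTags(new Tag().withKey("project").withValue("churn"),
                  new Tag().withKey("owner").withValue("data-team")));
```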
 
Specified by: addTags in interface AmazonMachineLearning
Parameters: addTagsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InvalidTagException
TagLimitExceededException
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public CreateBatchPredictionResult createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest)
 Generates predictions for a group of observations. The observations to process exist in one or more data files
 referenced by a DataSource. This operation creates a new BatchPrediction, and uses an
 MLModel and the data files referenced by the DataSource as information sources.
 
 CreateBatchPrediction is an asynchronous operation. In response to
 CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the
 BatchPrediction status to PENDING. After the BatchPrediction completes,
 Amazon ML sets the status to COMPLETED.
 
 You can poll for status updates by using the GetBatchPrediction operation and checking the
 Status parameter of the result. After the COMPLETED status appears, the results are
 available in the location specified by the OutputUri parameter.
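A sketch of the create-then-poll pattern described above; the IDs, S3 output location, and 30-second polling interval are placeholder assumptions:

```java
String batchPredictionId = "bp-exampleId";
client.createBatchPrediction(new CreateBatchPredictionRequest()
        .withBatchPredictionId(batchPredictionId)
        .withBatchPredictionName("Nightly scoring run")
        .withMLModelId("ml-exampleModelId")
        .withBatchPredictionDataSourceId("ds-exampleDataSourceId")
        .withOutputUri("s3://example-bucket/batch-output/"));

// Poll GetBatchPrediction until the asynchronous job leaves the PENDING/INPROGRESS states.
// (Assumes the enclosing method declares or handles InterruptedException.)
String status;
do {
    Thread.sleep(30_000L);
    status = client.getBatchPrediction(
            new GetBatchPredictionRequest().withBatchPredictionId(batchPredictionId)).getStatus();
} while ("PENDING".equals(status) || "INPROGRESS".equals(status));
// On COMPLETED, the results are available at the OutputUri given above.
```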
 
Specified by: createBatchPrediction in interface AmazonMachineLearning
Parameters: createBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromRDSResult createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest)
 Creates a DataSource object from a database hosted on Amazon Relational Database
 Service (Amazon RDS). A DataSource references data that can be used to perform
 CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
 
 CreateDataSourceFromRDS is an asynchronous operation. In response to
 CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the
 DataSource status to PENDING. After the DataSource is created and ready
 for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
 the COMPLETED or PENDING state can be used only to perform
 CreateMLModel, CreateEvaluation, or CreateBatchPrediction
 operations.
 
 If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and
 includes an error message in the Message attribute of the GetDataSource operation
 response.
 
Specified by: createDataSourceFromRDS in interface AmazonMachineLearning
Parameters: createDataSourceFromRDSRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromRedshiftResult createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest)
 Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource
 references data that can be used to perform either CreateMLModel, CreateEvaluation, or
 CreateBatchPrediction operations.
 
 CreateDataSourceFromRedshift is an asynchronous operation. In response to
 CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the
 DataSource status to PENDING. After the DataSource is created and ready
 for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
 COMPLETED or PENDING states can be used to perform only CreateMLModel,
 CreateEvaluation, or CreateBatchPrediction operations.
 
 If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
 includes an error message in the Message attribute of the GetDataSource operation
 response.
 
 The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified
 by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to
 transfer the result set of the SelectSqlQuery query to S3StagingLocation.
 
 After the DataSource has been created, it's ready for use in evaluations and batch predictions. If
 you plan to use the DataSource to train an MLModel, the DataSource also
 requires a recipe. A recipe describes how each input variable will be used in training an MLModel.
 Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it
 be combined with another variable or will it be split apart into word combinations? The recipe provides answers
 to these questions.
 
 You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon
 Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing
 datasource and copy the values to a CreateDataSource call. Change the settings that you want to
 change and make sure that all required fields have the appropriate values.
 
Specified by: createDataSourceFromRedshift in interface AmazonMachineLearning
Parameters: createDataSourceFromRedshiftRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromS3Result createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request)
 Creates a DataSource object. A DataSource references data that can be used to perform
 CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
 
 CreateDataSourceFromS3 is an asynchronous operation. In response to
 CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the
 DataSource status to PENDING. After the DataSource has been created and is
 ready for use, Amazon ML sets the Status parameter to COMPLETED.
 DataSource in the COMPLETED or PENDING state can be used to perform only
 CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.
 
 If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
 includes an error message in the Message attribute of the GetDataSource operation
 response.
 
 The observation data used in a DataSource should be ready to use; that is, it should have a
 consistent structure, and missing data values should be kept to a minimum. The observation data must reside in
 one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that
 describes the data items by name and type. The same schema must be used for all of the data files referenced by
 the DataSource.
 
 After the DataSource has been created, it's ready to use in evaluations and batch predictions. If
 you plan to use the DataSource to train an MLModel, the DataSource also
 needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will
 the variable be included or excluded from training? Will the variable be manipulated; for example, will it be
 combined with another variable or will it be split apart into word combinations? The recipe provides answers to
 these questions.
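A sketch of creating an S3-backed DataSource as described above; the bucket, file and schema locations, and IDs are placeholders, and ComputeStatistics is set on the assumption that the DataSource will later be used for training:

```java
client.createDataSourceFromS3(new CreateDataSourceFromS3Request()
        .withDataSourceId("ds-exampleS3Id")
        .withDataSourceName("Banking observations")
        .withComputeStatistics(true)   // needed if this DataSource will later train an MLModel
        .withDataSpec(new S3DataSpec()
                .withDataLocationS3("s3://example-bucket/observations.csv")
                .withDataSchemaLocationS3("s3://example-bucket/observations.csv.schema")));
```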
 
Specified by: createDataSourceFromS3 in interface AmazonMachineLearning
Parameters: createDataSourceFromS3Request
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateEvaluationResult createEvaluation(CreateEvaluationRequest createEvaluationRequest)
 Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set
 of observations associated with a DataSource. Like a DataSource for an
 MLModel, the DataSource for an Evaluation contains values for the
 Target Variable. The Evaluation compares the predicted result for each observation to
 the actual outcome and provides a summary so that you know how well the MLModel performs on
 the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or
 MulticlassAvgFScore, based on the corresponding MLModelType: BINARY,
 REGRESSION, or MULTICLASS.
 
 CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon
 Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After
 the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.
 
 You can use the GetEvaluation operation to check progress of the evaluation during the creation
 operation.
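A sketch of creating and then inspecting an Evaluation; the IDs are placeholders, and reading the BinaryAUC property assumes a completed evaluation of a BINARY model and the PerformanceMetrics properties map described in the service API:

```java
client.createEvaluation(new CreateEvaluationRequest()
        .withEvaluationId("ev-exampleId")
        .withEvaluationName("Holdout evaluation")
        .withMLModelId("ml-exampleModelId")
        .withEvaluationDataSourceId("ds-exampleHoldoutId"));

// Once the status reaches COMPLETED, the performance metric is exposed via GetEvaluation.
GetEvaluationResult evaluation = client.getEvaluation(
        new GetEvaluationRequest().withEvaluationId("ev-exampleId"));
System.out.println(evaluation.getStatus());
System.out.println(evaluation.getPerformanceMetrics().getProperties().get("BinaryAUC"));
```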
 
Specified by: createEvaluation in interface AmazonMachineLearning
Parameters: createEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateMLModelResult createMLModel(CreateMLModelRequest createMLModelRequest)
 Creates a new MLModel using the DataSource and the recipe as information sources.
 
 An MLModel is nearly immutable. Users can update only the MLModelName and the
 ScoreThreshold in an MLModel without creating a new MLModel.
 
 CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon
 Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING
 . After the MLModel has been created and is ready for use, Amazon ML sets the status to
 COMPLETED.
 
 You can use the GetMLModel operation to check the progress of the MLModel during the
 creation operation.
 
 CreateMLModel requires a DataSource with computed statistics, which can be created by
 setting ComputeStatistics to true in CreateDataSourceFromRDS,
 CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
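A sketch of the CreateMLModel call; the IDs are placeholders, the BINARY type is chosen for illustration, and the recipe parameters are omitted in this sketch:

```java
client.createMLModel(new CreateMLModelRequest()
        .withMLModelId("ml-exampleModelId")
        .withMLModelName("Churn model")
        .withMLModelType(MLModelType.BINARY)            // BINARY, REGRESSION, or MULTICLASS
        .withTrainingDataSourceId("ds-exampleS3Id"));    // a DataSource created with ComputeStatistics=true

// GetMLModel reports PENDING until training finishes, then COMPLETED.
String status = client.getMLModel(
        new GetMLModelRequest().withMLModelId("ml-exampleModelId")).getStatus();
```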
 
Specified by: createMLModel in interface AmazonMachineLearning
Parameters: createMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateRealtimeEndpointResult createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest)
 Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the
 MLModel; that is, the location to send real-time prediction requests for the specified
 MLModel.
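A sketch of creating the endpoint and reading back its URI; the model ID is a placeholder, and the status/URL fields are read on the assumption that the result carries the endpoint's RealtimeEndpointInfo:

```java
CreateRealtimeEndpointResult endpoint = client.createRealtimeEndpoint(
        new CreateRealtimeEndpointRequest().withMLModelId("ml-exampleModelId"));

RealtimeEndpointInfo info = endpoint.getRealtimeEndpointInfo();
// The URL is usable for Predict calls once the endpoint status becomes READY.
System.out.println(info.getEndpointStatus() + " " + info.getEndpointUrl());
```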
 
Specified by: createRealtimeEndpoint in interface AmazonMachineLearning
Parameters: createRealtimeEndpointRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteBatchPredictionResult deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest)
 Assigns the DELETED status to a BatchPrediction, rendering it unusable.
 
 After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation
 to verify that the status of the BatchPrediction changed to DELETED.
 
 Caution: The result of the DeleteBatchPrediction operation is irreversible.
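A sketch of the delete-and-verify sequence described above; the ID is a placeholder:

```java
client.deleteBatchPrediction(new DeleteBatchPredictionRequest()
        .withBatchPredictionId("bp-exampleId"));

// Confirm the irreversible delete took effect.
String status = client.getBatchPrediction(
        new GetBatchPredictionRequest().withBatchPredictionId("bp-exampleId")).getStatus();
System.out.println("DELETED".equals(status));
```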
 
Specified by: deleteBatchPrediction in interface AmazonMachineLearning
Parameters: deleteBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteDataSourceResult deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest)
 Assigns the DELETED status to a DataSource, rendering it unusable.
 
 After using the DeleteDataSource operation, you can use the GetDataSource operation to verify
 that the status of the DataSource changed to DELETED.
 
 Caution: The results of the DeleteDataSource operation are irreversible.
 
Specified by: deleteDataSource in interface AmazonMachineLearning
Parameters: deleteDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteEvaluationResult deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest)
 Assigns the DELETED status to an Evaluation, rendering it unusable.
 
 After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation
 to verify that the status of the Evaluation changed to DELETED.
 
 The results of the DeleteEvaluation operation are irreversible.
 
Specified by: deleteEvaluation in interface AmazonMachineLearning
Parameters: deleteEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteMLModelResult deleteMLModel(DeleteMLModelRequest deleteMLModelRequest)
 Assigns the DELETED status to an MLModel, rendering it unusable.
 
 After using the DeleteMLModel operation, you can use the GetMLModel operation to verify
 that the status of the MLModel changed to DELETED.
 
 Caution: The result of the DeleteMLModel operation is irreversible.
 
Specified by: deleteMLModel in interface AmazonMachineLearning
Parameters: deleteMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteRealtimeEndpointResult deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest)
 Deletes a real-time endpoint of an MLModel.
 
Specified by: deleteRealtimeEndpoint in interface AmazonMachineLearning
Parameters: deleteRealtimeEndpointRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteTagsResult deleteTags(DeleteTagsRequest deleteTagsRequest)
Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.
If you specify a tag that doesn't exist, Amazon ML ignores it.
Specified by: deleteTags in interface AmazonMachineLearning
Parameters: deleteTagsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InvalidTagException
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeBatchPredictionsResult describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest)
 Returns a list of BatchPrediction operations that match the search criteria in the request.
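A sketch of filtering and paging through results; the Status filter value, page size, and printed fields are illustrative assumptions:

```java
DescribeBatchPredictionsRequest request = new DescribeBatchPredictionsRequest()
        .withFilterVariable("Status")
        .withEQ("COMPLETED")
        .withLimit(25);

DescribeBatchPredictionsResult page;
do {
    page = client.describeBatchPredictions(request);
    for (BatchPrediction bp : page.getResults()) {
        System.out.println(bp.getBatchPredictionId() + " " + bp.getName());
    }
    request.setNextToken(page.getNextToken());   // continue until no further pages remain
} while (page.getNextToken() != null);
```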
 
Specified by: describeBatchPredictions in interface AmazonMachineLearning
Parameters: describeBatchPredictionsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeBatchPredictionsResult describeBatchPredictions()
Simplified method form for invoking the DescribeBatchPredictions operation.
Specified by: describeBatchPredictions in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeBatchPredictions(DescribeBatchPredictionsRequest)

public DescribeDataSourcesResult describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest)
 Returns a list of DataSource objects that match the search criteria in the request.
 
Specified by: describeDataSources in interface AmazonMachineLearning
Parameters: describeDataSourcesRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeDataSourcesResult describeDataSources()
Simplified method form for invoking the DescribeDataSources operation.
Specified by: describeDataSources in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeDataSources(DescribeDataSourcesRequest)

public DescribeEvaluationsResult describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest)
 Returns a list of Evaluation objects that match the search criteria in the request.
 
Specified by: describeEvaluations in interface AmazonMachineLearning
Parameters: describeEvaluationsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeEvaluationsResult describeEvaluations()
Simplified method form for invoking the DescribeEvaluations operation.
Specified by: describeEvaluations in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeEvaluations(DescribeEvaluationsRequest)

public DescribeMLModelsResult describeMLModels(DescribeMLModelsRequest describeMLModelsRequest)
 Returns a list of MLModel objects that match the search criteria in the request.
 
Specified by: describeMLModels in interface AmazonMachineLearning
Parameters: describeMLModelsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeMLModelsResult describeMLModels()
Simplified method form for invoking the DescribeMLModels operation.
Specified by: describeMLModels in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeMLModels(DescribeMLModelsRequest)

public DescribeTagsResult describeTags(DescribeTagsRequest describeTagsRequest)
Describes one or more of the tags for your Amazon ML object.
Specified by: describeTags in interface AmazonMachineLearning
Parameters: describeTagsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetBatchPredictionResult getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest)
 Returns a BatchPrediction that includes detailed metadata, status, and data file information for a
 Batch Prediction request.
 
Specified by: getBatchPrediction in interface AmazonMachineLearning
Parameters: getBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetDataSourceResult getDataSource(GetDataSourceRequest getDataSourceRequest)
 Returns a DataSource that includes metadata and data file information, as well as the current status
 of the DataSource.
 
 GetDataSource provides results in normal or verbose format. The verbose format adds the schema
 description and the list of files pointed to by the DataSource to the normal format.
 
Specified by: getDataSource in interface AmazonMachineLearning
Parameters: getDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetEvaluationResult getEvaluation(GetEvaluationRequest getEvaluationRequest)
 Returns an Evaluation that includes metadata as well as the current status of the
 Evaluation.
 
Specified by: getEvaluation in interface AmazonMachineLearning
Parameters: getEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetMLModelResult getMLModel(GetMLModelRequest getMLModelRequest)
 Returns an MLModel that includes detailed metadata, data source information, and the current status
 of the MLModel.
 
 GetMLModel provides results in normal or verbose format.
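A sketch of the verbose form; the model ID is a placeholder, and reading the recipe assumes the verbose format returns it:

```java
GetMLModelResult model = client.getMLModel(new GetMLModelRequest()
        .withMLModelId("ml-exampleModelId")
        .withVerbose(true));

System.out.println(model.getStatus());
System.out.println(model.getScoreThreshold());
System.out.println(model.getRecipe());   // populated in the verbose format
```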
 
Specified by: getMLModel in interface AmazonMachineLearning
Parameters: getMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public PredictResult predict(PredictRequest predictRequest)
 Generates a prediction for the observation using the specified ML Model.
 
Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
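A sketch of a real-time Predict call for a BINARY model; the feature names, endpoint URL, and model ID are placeholders, and which Prediction fields are populated depends on the model type as noted above:

```java
// java.util.Map / java.util.HashMap
Map<String, String> record = new HashMap<>();
record.put("tenureMonths", "34");
record.put("plan", "premium");

String endpointUrl = "https://realtime.machinelearning.us-east-1.amazonaws.com"; // typically the EndpointUrl from CreateRealtimeEndpoint
PredictResult result = client.predict(new PredictRequest()
        .withMLModelId("ml-exampleModelId")
        .withPredictEndpoint(endpointUrl)
        .withRecord(record));

Prediction prediction = result.getPrediction();
// A BINARY model populates the label and scores; a REGRESSION model populates predictedValue instead.
System.out.println(prediction.getPredictedLabel() + " " + prediction.getPredictedScores());
```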
Specified by: predict in interface AmazonMachineLearning
Parameters: predictRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
LimitExceededException - The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.
InternalServerException - An error on the server occurred when trying to process a request.
PredictorNotMountedException - The exception is thrown when a predict request is made to an unmounted MLModel.

public UpdateBatchPredictionResult updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest)
 Updates the BatchPredictionName of a BatchPrediction.
 
 You can use the GetBatchPrediction operation to view the contents of the updated data element.
 
Specified by: updateBatchPrediction in interface AmazonMachineLearning
Parameters: updateBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateDataSourceResult updateDataSource(UpdateDataSourceRequest updateDataSourceRequest)
 Updates the DataSourceName of a DataSource.
 
 You can use the GetDataSource operation to view the contents of the updated data element.
 
Specified by: updateDataSource in interface AmazonMachineLearning
Parameters: updateDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateEvaluationResult updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest)
 Updates the EvaluationName of an Evaluation.
 
 You can use the GetEvaluation operation to view the contents of the updated data element.
 
Specified by: updateEvaluation in interface AmazonMachineLearning
Parameters: updateEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateMLModelResult updateMLModel(UpdateMLModelRequest updateMLModelRequest)
 Updates the MLModelName and the ScoreThreshold of an MLModel.
 
 You can use the GetMLModel operation to view the contents of the updated data element.
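A sketch of renaming a model and adjusting its score threshold, then reading the result back; the ID, name, and threshold value are placeholders:

```java
client.updateMLModel(new UpdateMLModelRequest()
        .withMLModelId("ml-exampleModelId")
        .withMLModelName("Churn model v2")
        .withScoreThreshold(0.65f));   // only MLModelName and ScoreThreshold can be changed

GetMLModelResult updated = client.getMLModel(
        new GetMLModelRequest().withMLModelId("ml-exampleModelId"));
System.out.println(updated.getName() + " " + updated.getScoreThreshold());
```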
 
Specified by: updateMLModel in interface AmazonMachineLearning
Parameters: updateMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request)
Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic information for an executed request, you should use this method to retrieve it as soon as possible after executing the request.
Specified by: getCachedResponseMetadata in interface AmazonMachineLearning
Parameters: request - The originally executed request

public AmazonMachineLearningWaiters waiters()
Specified by: waiters in interface AmazonMachineLearning