the name of the topic; it also doubles as the unique identifier for the model in the models database
marks the time at which the model was generated.
the number of partitions used for the topic when WASP creates it
the number of replicas used for the topic when WASP creates it
specifies the format to use when encoding/decoding data to/from messages; allowed values are: avro, plaintext, json, binary
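The four data types imply different value encodings. As a purely illustrative sketch (WASP's actual encoders are internal to the framework), the dispatch looks roughly like this, with records represented as plain dicts:

```python
import json

# Illustrative only: how the four topic data types correspond
# to different value encodings.
def encode_value(data_type, record):
    if data_type == "json":
        return json.dumps(record).encode("utf-8")
    if data_type == "plaintext":
        # plaintext requires a single string value field
        (value,) = record.values()
        return value.encode("utf-8")
    if data_type == "binary":
        # binary requires a single bytes value field
        (value,) = record.values()
        return value
    if data_type == "avro":
        raise NotImplementedError(
            "avro needs a schema (and optionally a schema registry)")
    raise ValueError("unknown data type: " + data_type)
```

Note how plaintext and binary can only carry a single field, which is why valueFieldsNames is mandatory for them (see below).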
optionally specifies a field whose contents will be used as the message key when writing to Kafka. The field must be of type string or binary. The original field will be left as-is, so your schema must handle it (or you can use valueFieldsNames).
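The key-extraction semantics described above can be sketched as follows, assuming a record represented as a plain dict (the real implementation operates on rows inside WASP):

```python
def extract_key(record, key_field_name):
    """Return the message key taken from the configured field.

    The field must hold a string or bytes; the record itself is left
    untouched, which is why the value schema must either cover the
    field or filter it out via valueFieldsNames.
    """
    key = record[key_field_name]
    if isinstance(key, str):
        return key.encode("utf-8")
    if isinstance(key, (bytes, bytearray)):
        return bytes(key)
    raise TypeError("key field must be of type string or binary")
```

For example, with a record `{"id": "user-42", "amount": 10}` and keyFieldName set to `id`, the key would be `b"user-42"` and the record would still contain `id` afterwards.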
allows you to optionally specify a field whose contents will be used as message headers when writing to Kafka. The field must contain an array of non-null objects which must have a non-null field headerKey of type string and a field headerValue of type binary. The original field will be left as-is, so your schema must handle it (or you can use valueFieldsNames).
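In Avro terms, a headers field with the shape described above could be declared like the following schema fragment (illustrative; the outer field name `headers` and record name `KafkaHeader` are placeholders, only `headerKey` and `headerValue` are prescribed):

```json
{
  "name": "headers",
  "type": {
    "type": "array",
    "items": {
      "type": "record",
      "name": "KafkaHeader",
      "fields": [
        { "name": "headerKey",   "type": "string" },
        { "name": "headerValue", "type": "bytes" }
      ]
    }
  }
}
```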
allows you to specify a list of field names used to filter the fields that get passed to the value encoding; with this you can filter out fields that you don't need in the value, obviating the need to handle them in the schema. This is especially useful when specifying keyFieldName or headersFieldName. For the avro and json topic data types this is optional; for the plaintext and binary topic data types this field is mandatory and the list must contain a single value field name of the proper type (string for plaintext, binary for binary).
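The filtering behaviour can be sketched like this, again on a dict-based record (illustrative, not the WASP implementation):

```python
def filter_value_fields(record, value_fields_names, data_type):
    """Keep only the listed fields for value encoding."""
    # avro/json: filtering is optional; None means "keep everything"
    if value_fields_names is None:
        if data_type in ("plaintext", "binary"):
            raise ValueError(
                "valueFieldsNames is mandatory for " + data_type)
        return dict(record)
    # plaintext/binary: exactly one value field is allowed
    if data_type in ("plaintext", "binary") and len(value_fields_names) != 1:
        raise ValueError("plaintext/binary topics need exactly one value field")
    return {name: record[name] for name in value_fields_names}
```

With keyFieldName set to `id`, for instance, you could pass `["payload"]` as valueFieldsNames so the value schema never has to mention `id`.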
whether a schema registry should be used to handle schema evolution (this only makes sense for the avro data type)
the Avro schema to use when encoding the value; for plaintext and binary this field is ignored. For json and avro, the field names need to match 1:1 with valueFieldsNames or with the schema output of the strategy
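As an illustration of the 1:1 matching requirement, a value schema paired with `valueFieldsNames = ["payload", "ts"]` would need to declare exactly those field names (the record name `ExampleValue` and field names here are placeholders):

```json
{
  "type": "record",
  "name": "ExampleValue",
  "fields": [
    { "name": "payload", "type": "string" },
    { "name": "ts",      "type": "long" }
  ]
}
```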
the compression to use to compress messages
the subject strategy to use when registering the schema with the schema registry, for the schema registry implementations that support it. This property only makes sense for avro and only if useAvroSchemaManager is set to true
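The exact strategies available depend on the schema registry implementation in use; registries in the Confluent mould typically offer three subject naming strategies, whose resulting subject names can be sketched as:

```python
def subject_name(strategy, topic, record_full_name, is_key=False):
    """Subject naming conventions as popularized by Confluent Schema Registry.

    Whether these exact strategies apply here depends on the schema
    registry implementation configured.
    """
    suffix = "key" if is_key else "value"
    if strategy == "TopicNameStrategy":
        return topic + "-" + suffix
    if strategy == "RecordNameStrategy":
        return record_full_name
    if strategy == "TopicRecordNameStrategy":
        return topic + "-" + record_full_name
    raise ValueError("unknown strategy: " + strategy)
```

For a topic `events` and a record `com.example.Event`, these yield `events-value`, `com.example.Event`, and `events-com.example.Event` respectively.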
the schema to be used to encode the key as avro
A model for a topic, that is, a message queue of some sort. Right now this means just Kafka topics.