Module Smaws_Client_DynamoDB

DynamoDB client library built on EIO.

Types

type attribute_value =
  1. | BOOL of bool
  2. | NULL of bool
  3. | L of attribute_value list
  4. | M of (string * attribute_value) list
  5. | BS of bytes list
  6. | NS of string list
  7. | SS of string list
  8. | B of bytes
  9. | N of string
  10. | S of string
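
A minimal sketch of how these constructors compose into an item map (all attribute names and values here are illustrative):

(* Numbers travel as strings (N, NS), exactly as in DynamoDB's wire
   format; binary values (B, BS) are raw bytes. *)
let user_item : (string * attribute_value) list = [
  ("pk",       S "USER#42");
  ("age",      N "37");
  ("active",   BOOL true);
  ("nickname", NULL true);                          (* explicit null *)
  ("tags",     SS ["admin"; "beta"]);
  ("scores",   NS ["1"; "2.5"]);
  ("avatar",   B (Bytes.of_string "\x89PNG"));
  ("address",  M [("city", S "Oslo"); ("zip", S "0150")]);
  ("history",  L [S "login"; S "logout"]);
]
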
type put_request = {
  1. item : (string * attribute_value) list;
    (*

    A map of attribute name to attribute values, representing the primary key of an item to be processed by PutItem. All of the table's primary key attributes must be specified, and their data types must match those of the table's key schema. If any attributes are present in the item that are part of an index key schema for the table, their types must match the index key schema.

    *)
}

Represents a request to perform a PutItem operation on an item.

type delete_request = {
  1. key : (string * attribute_value) list;
    (*

    A map of attribute name to attribute values, representing the primary key of the item to delete. All of the table's primary key attributes must be specified, and their data types must match those of the table's key schema.

    *)
}

Represents a request to perform a DeleteItem operation on an item.

type write_request = {
  1. delete_request : delete_request option;
    (*

    A request to perform a DeleteItem operation.

    *)
  2. put_request : put_request option;
    (*

    A request to perform a PutItem operation.

    *)
}

Represents an operation to perform - either DeleteItem or PutItem. You can only request one of these operations, not both, in a single WriteRequest. If you do need to perform both of these operations, you need to provide two separate WriteRequest objects.
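
Since a WriteRequest carries exactly one of the two operations, a put and a delete are sketched here as two separate values (the item contents are illustrative):

(* One write_request per operation; setting both fields in a single
   value is invalid. *)
let put_user : write_request =
  { put_request = Some { item = [ ("pk", S "USER#42"); ("name", S "Ada") ] };
    delete_request = None }

let delete_user : write_request =
  { put_request = None;
    delete_request = Some { key = [ ("pk", S "USER#7") ] } }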

type time_to_live_specification = {
  1. attribute_name : string;
    (*

    The name of the TTL attribute used to store the expiration time for items in the table.

    *)
  2. enabled : bool;
    (*

    Indicates whether TTL is to be enabled (true) or disabled (false) on the table.

    *)
}

Represents the settings used to enable or disable Time to Live (TTL) for the specified table.

type update_time_to_live_output = {
  1. time_to_live_specification : time_to_live_specification option;
    (*

    Represents the output of an UpdateTimeToLive operation.

    *)
}
type update_time_to_live_input = {
  1. time_to_live_specification : time_to_live_specification;
    (*

    Represents the settings used to enable or disable Time to Live for the specified table.

    *)
  2. table_name : string;
    (*

    The name of the table to be configured. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of an UpdateTimeToLive operation.
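
A minimal sketch of enabling TTL (the table and attribute names are illustrative):

(* Expire items in a hypothetical "Sessions" table using their
   "expires_at" attribute as the TTL timestamp. *)
let ttl_request : update_time_to_live_input =
  { table_name = "Sessions";
    time_to_live_specification =
      { attribute_name = "expires_at"; enabled = true } }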

type resource_not_found_exception = {
  1. message : string option;
    (*

    The requested resource does not exist.

    *)
}

The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.

type resource_in_use_exception = {
  1. message : string option;
    (*

    The resource that you are attempting to change is in use.

    *)
}

The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.

type limit_exceeded_exception = {
  1. message : string option;
    (*

    Too many operations for a given subscriber.

    *)
}

There is no limit to the number of daily on-demand backups that can be taken.

For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.

When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.

When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.

There is a soft account quota of 2,500 tables.

GetRecords was called with a value of more than 1000 for the limit request parameter.

More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.

type invalid_endpoint_exception = {
  1. message : string option;
}
type internal_server_error = {
  1. message : string option;
    (*

    The server encountered an internal error trying to fulfill the request.

    *)
}

An error occurred on the server side.

type table_status =
  1. | ARCHIVED
  2. | ARCHIVING
  3. | INACCESSIBLE_ENCRYPTION_CREDENTIALS
  4. | ACTIVE
  5. | DELETING
  6. | UPDATING
  7. | CREATING
type index_status =
  1. | ACTIVE
  2. | DELETING
  3. | UPDATING
  4. | CREATING
type auto_scaling_target_tracking_scaling_policy_configuration_description = {
  1. target_value : float;
    (*

    The target value for the metric. The range is 8.515920e-109 to 1.174271e+108 (Base 10), or 2^-360 to 2^360 (Base 2).

    *)
  2. scale_out_cooldown : int option;
    (*

    The amount of time, in seconds, after a scale out activity completes before another scale out activity can start. While the cooldown period is in effect, the capacity that has been added by the previous scale out event that initiated the cooldown is calculated as part of the desired capacity for the next scale out. You should continuously (but not excessively) scale out.

    *)
  3. scale_in_cooldown : int option;
    (*

    The amount of time, in seconds, after a scale in activity completes before another scale in activity can start. The cooldown period is used to block subsequent scale in requests until it has expired. You should scale in conservatively to protect your application's availability. However, if another alarm triggers a scale out policy during the cooldown period after a scale-in, application auto scaling scales out your scalable target immediately.

    *)
  4. disable_scale_in : bool option;
    (*

    Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. The default value is false.

    *)
}

Represents the properties of a target tracking scaling policy.

type auto_scaling_policy_description = {
  1. target_tracking_scaling_policy_configuration : auto_scaling_target_tracking_scaling_policy_configuration_description option;
    (*

    Represents a target tracking scaling policy configuration.

    *)
  2. policy_name : string option;
    (*

    The name of the scaling policy.

    *)
}

Represents the properties of the scaling policy.

type auto_scaling_settings_description = {
  1. scaling_policies : auto_scaling_policy_description list option;
    (*

    Information about the scaling policies.

    *)
  2. auto_scaling_role_arn : string option;
    (*

    Role ARN used for configuring the auto scaling policy.

    *)
  3. auto_scaling_disabled : bool option;
    (*

    Indicates whether auto scaling is disabled for this global table or global secondary index.

    *)
  4. maximum_units : int option;
    (*

    The maximum capacity units that a global table or global secondary index should be scaled up to.

    *)
  5. minimum_units : int option;
    (*

    The minimum capacity units that a global table or global secondary index should be scaled down to.

    *)
}

Represents the auto scaling settings for a global table or global secondary index.

type replica_global_secondary_index_auto_scaling_description = {
  1. provisioned_write_capacity_auto_scaling_settings : auto_scaling_settings_description option;
  2. provisioned_read_capacity_auto_scaling_settings : auto_scaling_settings_description option;
  3. index_status : index_status option;
    (*

    The current state of the replica global secondary index:

    • CREATING - The index is being created.
    • UPDATING - The table/index configuration is being updated. The table/index remains available for data operations when UPDATING.
    • DELETING - The index is being deleted.
    • ACTIVE - The index is ready for use.
    *)
  4. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the auto scaling configuration for a replica global secondary index.

type replica_status =
  1. | INACCESSIBLE_ENCRYPTION_CREDENTIALS
  2. | REGION_DISABLED
  3. | ACTIVE
  4. | DELETING
  5. | UPDATING
  6. | CREATION_FAILED
  7. | CREATING
type replica_auto_scaling_description = {
  1. replica_status : replica_status option;
    (*

    The current state of the replica:

    • CREATING - The replica is being created.
    • UPDATING - The replica is being updated.
    • DELETING - The replica is being deleted.
    • ACTIVE - The replica is ready for use.
    *)
  2. replica_provisioned_write_capacity_auto_scaling_settings : auto_scaling_settings_description option;
  3. replica_provisioned_read_capacity_auto_scaling_settings : auto_scaling_settings_description option;
  4. global_secondary_indexes : replica_global_secondary_index_auto_scaling_description list option;
    (*

    Replica-specific global secondary index auto scaling settings.

    *)
  5. region_name : string option;
    (*

    The Region where the replica exists.

    *)
}

Represents the auto scaling settings of the replica.

type table_auto_scaling_description = {
  1. replicas : replica_auto_scaling_description list option;
    (*

    Represents replicas of the global table.

    *)
  2. table_status : table_status option;
    (*

    The current state of the table:

    • CREATING - The table is being created.
    • UPDATING - The table is being updated.
    • DELETING - The table is being deleted.
    • ACTIVE - The table is ready for use.
    *)
  3. table_name : string option;
    (*

    The name of the table.

    *)
}

Represents the auto scaling configuration for a global table.

type update_table_replica_auto_scaling_output = {
  1. table_auto_scaling_description : table_auto_scaling_description option;
    (*

    Returns information about the auto scaling settings of a table with replicas.

    *)
}
type auto_scaling_target_tracking_scaling_policy_configuration_update = {
  1. target_value : float;
    (*

    The target value for the metric. The range is 8.515920e-109 to 1.174271e+108 (Base 10), or 2^-360 to 2^360 (Base 2).

    *)
  2. scale_out_cooldown : int option;
    (*

    The amount of time, in seconds, after a scale out activity completes before another scale out activity can start. While the cooldown period is in effect, the capacity that has been added by the previous scale out event that initiated the cooldown is calculated as part of the desired capacity for the next scale out. You should continuously (but not excessively) scale out.

    *)
  3. scale_in_cooldown : int option;
    (*

    The amount of time, in seconds, after a scale in activity completes before another scale in activity can start. The cooldown period is used to block subsequent scale in requests until it has expired. You should scale in conservatively to protect your application's availability. However, if another alarm triggers a scale out policy during the cooldown period after a scale-in, application auto scaling scales out your scalable target immediately.

    *)
  4. disable_scale_in : bool option;
    (*

    Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. The default value is false.

    *)
}

Represents the settings of a target tracking scaling policy that will be modified.

type auto_scaling_policy_update = {
  1. target_tracking_scaling_policy_configuration : auto_scaling_target_tracking_scaling_policy_configuration_update;
    (*

    Represents a target tracking scaling policy configuration.

    *)
  2. policy_name : string option;
    (*

    The name of the scaling policy.

    *)
}

Represents the auto scaling policy to be modified.

type auto_scaling_settings_update = {
  1. scaling_policy_update : auto_scaling_policy_update option;
    (*

    The scaling policy to apply for scaling target global table or global secondary index capacity units.

    *)
  2. auto_scaling_role_arn : string option;
    (*

    Role ARN used for configuring the auto scaling policy.

    *)
  3. auto_scaling_disabled : bool option;
    (*

    Indicates whether auto scaling is disabled for this global table or global secondary index.

    *)
  4. maximum_units : int option;
    (*

    The maximum capacity units that a global table or global secondary index should be scaled up to.

    *)
  5. minimum_units : int option;
    (*

    The minimum capacity units that a global table or global secondary index should be scaled down to.

    *)
}

Represents the auto scaling settings to be modified for a global table or global secondary index.
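
A sketch of a settings update targeting 70% utilization between 5 and 100 capacity units (all numbers are illustrative):

(* Bounds plus a target tracking policy; fields set to None are
   simply omitted from the request. *)
let scaling_update : auto_scaling_settings_update =
  { minimum_units = Some 5;
    maximum_units = Some 100;
    auto_scaling_disabled = None;
    auto_scaling_role_arn = None;
    scaling_policy_update =
      Some
        { policy_name = None;
          target_tracking_scaling_policy_configuration =
            { target_value = 70.0;
              scale_in_cooldown = Some 60;
              scale_out_cooldown = Some 60;
              disable_scale_in = None } } }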

type global_secondary_index_auto_scaling_update = {
  1. provisioned_write_capacity_auto_scaling_update : auto_scaling_settings_update option;
  2. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the auto scaling settings of a global secondary index for a global table that will be modified.

type replica_global_secondary_index_auto_scaling_update = {
  1. provisioned_read_capacity_auto_scaling_update : auto_scaling_settings_update option;
  2. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the auto scaling settings of a global secondary index for a replica that will be modified.

type replica_auto_scaling_update = {
  1. replica_provisioned_read_capacity_auto_scaling_update : auto_scaling_settings_update option;
  2. replica_global_secondary_index_updates : replica_global_secondary_index_auto_scaling_update list option;
    (*

    Represents the auto scaling settings of global secondary indexes that will be modified.

    *)
  3. region_name : string;
    (*

    The Region where the replica exists.

    *)
}

Represents the auto scaling settings of a replica that will be modified.

type update_table_replica_auto_scaling_input = {
  1. replica_updates : replica_auto_scaling_update list option;
    (*

    Represents the auto scaling settings of replicas of the table that will be modified.

    *)
  2. provisioned_write_capacity_auto_scaling_update : auto_scaling_settings_update option;
  3. table_name : string;
    (*

    The name of the global table to be updated. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  4. global_secondary_index_updates : global_secondary_index_auto_scaling_update list option;
    (*

    Represents the auto scaling settings of the global secondary indexes of the replica to be updated.

    *)
}
type scalar_attribute_type =
  1. | B
  2. | N
  3. | S
type attribute_definition = {
  1. attribute_type : scalar_attribute_type;
    (*

    The data type for the attribute, where:

    • S - the attribute is of type String
    • N - the attribute is of type Number
    • B - the attribute is of type Binary
    *)
  2. attribute_name : string;
    (*

    A name for the attribute.

    *)
}

Represents an attribute for describing the schema for the table and indexes.

type key_type =
  1. | RANGE
  2. | HASH
type key_schema_element = {
  1. key_type : key_type;
    (*

    The role that this key attribute will assume:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  2. attribute_name : string;
    (*

    The name of a key attribute.

    *)
}

Represents a single element of a key schema. A key schema specifies the attributes that make up the primary key of a table, or the key attributes of an index.

A KeySchemaElement represents exactly one attribute of the primary key. For example, a simple primary key would be represented by one KeySchemaElement (for the partition key). A composite primary key would require one KeySchemaElement for the partition key, and another KeySchemaElement for the sort key.

A KeySchemaElement must be a scalar, top-level attribute (not a nested attribute). The data type must be one of String, Number, or Binary. The attribute cannot be nested within a List or a Map.
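
For example, the composite primary key described above could be written as follows (the attribute names are illustrative):

(* Partition key first, then the sort key. *)
let key_schema : key_schema_element list = [
  { attribute_name = "artist"; key_type = HASH };
  { attribute_name = "song";   key_type = RANGE };
]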

type provisioned_throughput_description = {
  1. write_capacity_units : int option;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException.

    *)
  2. read_capacity_units : int option;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. Eventually consistent reads require less effort than strongly consistent reads, so a setting of 50 ReadCapacityUnits per second provides 100 eventually consistent ReadCapacityUnits per second.

    *)
  3. number_of_decreases_today : int option;
    (*

    The number of provisioned throughput decreases for this table during this UTC calendar day. For current maximums on provisioned throughput decreases, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  4. last_decrease_date_time : float option;
    (*

    The date and time of the last provisioned throughput decrease for this table.

    *)
  5. last_increase_date_time : float option;
    (*

    The date and time of the last provisioned throughput increase for this table.

    *)
}

Represents the provisioned throughput settings for the table, consisting of read and write capacity units, along with data about increases and decreases.

type billing_mode =
  1. | PAY_PER_REQUEST
  2. | PROVISIONED
type billing_mode_summary = {
  1. last_update_to_pay_per_request_date_time : float option;
    (*

    Represents the time when PAY_PER_REQUEST was last set as the read/write capacity mode.

    *)
  2. billing_mode : billing_mode option;
    (*

    Controls how you are charged for read and write throughput and how you manage capacity. This setting can be changed later.

    • PROVISIONED - Sets the read/write capacity mode to PROVISIONED. We recommend using PROVISIONED for predictable workloads.
    • PAY_PER_REQUEST - Sets the read/write capacity mode to PAY_PER_REQUEST. We recommend using PAY_PER_REQUEST for unpredictable workloads.
    *)
}

Contains the details for the read/write capacity mode. For more information about the PROVISIONED and PAY_PER_REQUEST billing modes, see Read/write capacity mode in the Amazon DynamoDB Developer Guide.

You may need to switch to on-demand mode at least once in order to return a BillingModeSummary response.

type projection_type =
  1. | INCLUDE
  2. | KEYS_ONLY
  3. | ALL
type projection = {
  1. non_key_attributes : string list option;
    (*

    Represents the non-key attribute names which will be projected into the index.

    For local secondary indexes, the total count of NonKeyAttributes summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.

    *)
  2. projection_type : projection_type option;
    (*

    The set of attributes that are projected into the index:

    • KEYS_ONLY - Only the index and primary keys are projected into the index.
    • INCLUDE - In addition to the attributes described in KEYS_ONLY, the secondary index will include other non-key attributes that you specify.
    • ALL - All of the table attributes are projected into the index.

    When using the DynamoDB console, ALL is selected by default.

    *)
}

Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
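
A sketch of an INCLUDE projection (the projected attribute names are illustrative):

(* Project two non-key attributes into the index, in addition to the
   automatically projected key attributes. *)
let proj : projection =
  { projection_type = Some INCLUDE;
    non_key_attributes = Some [ "genre"; "year" ] }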

type local_secondary_index_description = {
  1. index_arn : string option;
    (*

    The Amazon Resource Name (ARN) that uniquely identifies the index.

    *)
  2. item_count : int option;
    (*

    The number of items in the specified index. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  3. index_size_bytes : int option;
    (*

    The total size of the specified index, in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  4. projection : projection option;
    (*

    Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  5. key_schema : key_schema_element list option;
    (*

    The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  6. index_name : string option;
    (*

    Represents the name of the local secondary index.

    *)
}

Represents the properties of a local secondary index.

type on_demand_throughput = {
  1. max_write_request_units : int option;
    (*

    Maximum number of write request units for the specified table.

    To specify a maximum OnDemandThroughput on your table, set the value of MaxWriteRequestUnits as greater than or equal to 1. To remove the maximum OnDemandThroughput that is currently set on your table, set the value of MaxWriteRequestUnits to -1.

    *)
  2. max_read_request_units : int option;
    (*

    Maximum number of read request units for the specified table.

    To specify a maximum OnDemandThroughput on your table, set the value of MaxReadRequestUnits as greater than or equal to 1. To remove the maximum OnDemandThroughput that is currently set on your table, set the value of MaxReadRequestUnits to -1.

    *)
}

Sets the maximum number of read and write units for the specified on-demand table. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.
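
A sketch using the conventions from the field docs above (the numbers are illustrative):

(* Cap reads at 1000 request units; -1 removes a previously set
   write cap, per the field documentation above. *)
let throughput : on_demand_throughput =
  { max_read_request_units = Some 1000;
    max_write_request_units = Some (-1) }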

type global_secondary_index_description = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. index_arn : string option;
    (*

    The Amazon Resource Name (ARN) that uniquely identifies the index.

    *)
  3. item_count : int option;
    (*

    The number of items in the specified index. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  4. index_size_bytes : int option;
    (*

    The total size of the specified index, in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  5. provisioned_throughput : provisioned_throughput_description option;
    (*

    Represents the provisioned throughput settings for the specified global secondary index.

    For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  6. backfilling : bool option;
    (*

    Indicates whether the index is currently backfilling. Backfilling is the process of reading items from the table and determining whether they can be added to the index. (Not all items will qualify: For example, a partition key cannot have any duplicate values.) If an item can be added to the index, DynamoDB will do so. After all items have been processed, the backfilling operation is complete and Backfilling is false.

    You can delete an index that is being created during the Backfilling phase when IndexStatus is set to CREATING and Backfilling is true. You can't delete the index that is being created when IndexStatus is set to CREATING and Backfilling is false.

    For indexes that were created during a CreateTable operation, the Backfilling attribute does not appear in the DescribeTable output.

    *)
  7. index_status : index_status option;
    (*

    The current state of the global secondary index:

    • CREATING - The index is being created.
    • UPDATING - The index is being updated.
    • DELETING - The index is being deleted.
    • ACTIVE - The index is ready for use.
    *)
  8. projection : projection option;
    (*

    Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  9. key_schema : key_schema_element list option;
    (*

    The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  10. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the properties of a global secondary index.

type stream_view_type =
  1. | KEYS_ONLY
  2. | NEW_AND_OLD_IMAGES
  3. | OLD_IMAGE
  4. | NEW_IMAGE
type stream_specification = {
  1. stream_view_type : stream_view_type option;
    (*

    When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are:

    • KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
    • NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
    • OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
    • NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
    *)
  2. stream_enabled : bool;
    (*

    Indicates whether DynamoDB Streams is enabled (true) or disabled (false) on the table.

    *)
}

Represents the DynamoDB Streams configuration for a table in DynamoDB.
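
A minimal sketch of enabling a stream that captures both item images:

(* NEW_AND_OLD_IMAGES writes both the pre- and post-modification
   versions of the item to the stream. *)
let stream : stream_specification =
  { stream_enabled = true;
    stream_view_type = Some NEW_AND_OLD_IMAGES }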

type provisioned_throughput_override = {
  1. read_capacity_units : int option;
    (*

    Replica-specific read capacity units. If not specified, uses the source table's read capacity settings.

    *)
}

Replica-specific provisioned throughput settings. If not specified, uses the source table's provisioned throughput settings.

type on_demand_throughput_override = {
  1. max_read_request_units : int option;
    (*

    Maximum number of read request units for the specified replica table.

    *)
}

Overrides the on-demand throughput settings for this replica table. If you don't specify a value for this parameter, it uses the source table's on-demand throughput settings.

type replica_global_secondary_index_description = {
  1. on_demand_throughput_override : on_demand_throughput_override option;
    (*

    Overrides the maximum on-demand throughput for the specified global secondary index in the specified replica table.

    *)
  2. provisioned_throughput_override : provisioned_throughput_override option;
    (*

    If not described, uses the source table GSI's read capacity settings.

    *)
  3. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the properties of a replica global secondary index.

type table_class =
  1. | STANDARD_INFREQUENT_ACCESS
  2. | STANDARD
type table_class_summary = {
  1. last_update_date_time : float option;
    (*

    The date and time at which the table class was last updated.

    *)
  2. table_class : table_class option;
    (*

    The table class of the specified table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.

    *)
}

Contains details of the table class.

type replica_description = {
  1. replica_table_class_summary : table_class_summary option;
  2. replica_inaccessible_date_time : float option;
    (*

    The time at which the replica was first detected as inaccessible. To determine the cause of inaccessibility, check the ReplicaStatus property.

    *)
  3. global_secondary_indexes : replica_global_secondary_index_description list option;
    (*

    Replica-specific global secondary index settings.

    *)
  4. on_demand_throughput_override : on_demand_throughput_override option;
    (*

    Overrides the maximum on-demand throughput settings for the specified replica table.

    *)
  5. provisioned_throughput_override : provisioned_throughput_override option;
    (*

    Replica-specific provisioned throughput. If not described, uses the source table's provisioned throughput settings.

    *)
  6. kms_master_key_id : string option;
    (*

    The KMS key of the replica that will be used for KMS encryption.

    *)
  7. replica_status_percent_progress : string option;
    (*

    Specifies the progress of a Create, Update, or Delete action on the replica as a percentage.

    *)
  8. replica_status_description : string option;
    (*

    Detailed information about the replica status.

    *)
  9. replica_status : replica_status option;
    (*

    The current state of the replica:

    • CREATING - The replica is being created.
    • UPDATING - The replica is being updated.
    • DELETING - The replica is being deleted.
    • ACTIVE - The replica is ready for use.
    • REGION_DISABLED - The replica is inaccessible because the Amazon Web Services Region has been disabled.

      If the Amazon Web Services Region remains inaccessible for more than 20 hours, DynamoDB will remove this replica from the replication group. The replica will not be deleted and replication will stop from and to this region.

    • INACCESSIBLE_ENCRYPTION_CREDENTIALS - The KMS key used to encrypt the table is inaccessible.

      If the KMS key remains inaccessible for more than 20 hours, DynamoDB will remove this replica from the replication group. The replica will not be deleted and replication will stop from and to this region.

    *)
  10. region_name : string option;
    (*

    The name of the Region.

    *)
}

Contains the details of the replica.

type restore_summary = {
  1. restore_in_progress : bool;
    (*

    Indicates whether a restore is in progress.

    *)
  2. restore_date_time : float;
    (*

    Point in time or source backup time.

    *)
  3. source_table_arn : string option;
    (*

    The ARN of the source table of the backup that is being restored.

    *)
  4. source_backup_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the backup from which the table was restored.

    *)
}

Contains details for the restore.

type sse_status =
  1. | UPDATING
  2. | DISABLED
  3. | DISABLING
  4. | ENABLED
  5. | ENABLING
type sse_type =
  1. | KMS
  2. | AES256
type sse_description = {
  1. inaccessible_encryption_date_time : float option;
    (*

    Indicates the time, in UNIX epoch date format, when DynamoDB detected that the table's KMS key was inaccessible. This attribute will automatically be cleared when DynamoDB detects that the table's KMS key is accessible again. DynamoDB will initiate the table archival process when the table's KMS key remains inaccessible for more than seven days from this date.

    *)
  2. kms_master_key_arn : string option;
    (*

    The KMS key ARN used for the KMS encryption.

    *)
  3. sse_type : sse_type option;
    (*

    Server-side encryption type. The only supported value is:

    • KMS - Server-side encryption that uses Key Management Service. The key is stored in your account and is managed by KMS (KMS charges apply).
    *)
  4. status : sse_status option;
    (*

    Represents the current state of server-side encryption. The only supported values are:

    • ENABLED - Server-side encryption is enabled.
    • UPDATING - Server-side encryption is being updated.
    *)
}

The description of the server-side encryption status on the specified table.

type archival_summary = {
  1. archival_backup_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the backup the table was archived to, when applicable in the archival reason. If you wish to restore this backup to the same table name, you will need to delete the original table.

    *)
  2. archival_reason : string option;
    (*

    The reason DynamoDB archived the table. Currently, the only possible value is:

    • INACCESSIBLE_ENCRYPTION_CREDENTIALS - The table was archived due to the table's KMS key being inaccessible for more than seven days. An On-Demand backup was created at the archival time.
    *)
  3. archival_date_time : float option;
    (*

    The date and time when table archival was initiated by DynamoDB, in UNIX epoch time format.

    *)
}

Contains details of a table archival operation.

type table_description = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    The maximum number of read and write units for the specified on-demand table. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. deletion_protection_enabled : bool option;
    (*

    Indicates whether deletion protection is enabled (true) or disabled (false) on the table.

    *)
  3. table_class_summary : table_class_summary option;
    (*

    Contains details of the table class.

    *)
  4. archival_summary : archival_summary option;
    (*

    Contains information about the table archive.

    *)
  5. sse_description : sse_description option;
    (*

    The description of the server-side encryption status on the specified table.

    *)
  6. restore_summary : restore_summary option;
    (*

    Contains details for the restore.

    *)
  7. replicas : replica_description list option;
    (*

    Represents replicas of the table.

    *)
  8. global_table_version : string option;
    (*

    Represents the version of global tables in use, if the table is replicated across Amazon Web Services Regions.

    *)
  9. latest_stream_arn : string option;
    (*

    The Amazon Resource Name (ARN) that uniquely identifies the latest stream for this table.

    *)
  10. latest_stream_label : string option;
    (*

    A timestamp, in ISO 8601 format, for this stream.

    Note that LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:

    • Amazon Web Services customer ID
    • Table name
    • StreamLabel
    *)
  11. stream_specification : stream_specification option;
    (*

    The current DynamoDB Streams configuration for the table.

    *)
  12. global_secondary_indexes : global_secondary_index_description list option;
    (*

    The global secondary indexes, if any, on the table. Each index is scoped to a given partition key value. Each element is composed of:

    • Backfilling - If true, then the index is currently in the backfilling phase. Backfilling occurs only when a new global secondary index is added to the table. It is the process by which DynamoDB populates the new index with data from the table. (This attribute does not appear for indexes that were created during a CreateTable operation.)

      You can delete an index that is being created during the Backfilling phase when IndexStatus is set to CREATING and Backfilling is true. You can't delete the index that is being created when IndexStatus is set to CREATING and Backfilling is false. (This attribute does not appear for indexes that were created during a CreateTable operation.)

    • IndexName - The name of the global secondary index.
    • IndexSizeBytes - The total size of the global secondary index, in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.
    • IndexStatus - The current status of the global secondary index:

      • CREATING - The index is being created.
      • UPDATING - The index is being updated.
      • DELETING - The index is being deleted.
      • ACTIVE - The index is ready for use.
    • ItemCount - The number of items in the global secondary index. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.
    • KeySchema - Specifies the complete index key schema. The attribute names in the key schema must be between 1 and 255 characters (inclusive). The key schema must begin with the same partition key as the table.
    • Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:

      • ProjectionType - One of the following:

        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - In addition to the attributes described in KEYS_ONLY, the secondary index will include other non-key attributes that you specify.
        • ALL - All of the table attributes are projected into the index.
      • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
    • ProvisionedThroughput - The provisioned throughput settings for the global secondary index, consisting of read and write capacity units, along with data about increases and decreases.

    If the table is in the DELETING state, no information about indexes will be returned.

    *)
  13. local_secondary_indexes : local_secondary_index_description list option;
    (*

    Represents one or more local secondary indexes on the table. Each index is scoped to a given partition key value. Tables with one or more local secondary indexes are subject to an item collection size limit, where the amount of data within a given item collection cannot exceed 10 GB. Each element is composed of:

    • IndexName - The name of the local secondary index.
    • KeySchema - Specifies the complete index key schema. The attribute names in the key schema must be between 1 and 255 characters (inclusive). The key schema must begin with the same partition key as the table.
    • Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:

      • ProjectionType - One of the following:

        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes is in NonKeyAttributes.
        • ALL - All of the table attributes are projected into the index.
      • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
    • IndexSizeBytes - Represents the total size of the index, in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.
    • ItemCount - Represents the number of items in the index. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    If the table is in the DELETING state, no information about indexes will be returned.

    *)
  14. billing_mode_summary : billing_mode_summary option;
    (*

    Contains the details for the read/write capacity mode.

    *)
  15. table_id : string option;
    (*

    Unique identifier for the table for which the backup was created.

    *)
  16. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) that uniquely identifies the table.

    *)
  17. item_count : int option;
    (*

    The number of items in the specified table. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  18. table_size_bytes : int option;
    (*

    The total size of the specified table, in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  19. provisioned_throughput : provisioned_throughput_description option;
    (*

    The provisioned throughput settings for the table, consisting of read and write capacity units, along with data about increases and decreases.

    *)
  20. creation_date_time : float option;
    (*

    The date and time when the table was created, in UNIX epoch time format.

    *)
  21. table_status : table_status option;
    (*

    The current state of the table:

    • CREATING - The table is being created.
    • UPDATING - The table/index configuration is being updated. The table/index remains available for data operations when UPDATING.
    • DELETING - The table is being deleted.
    • ACTIVE - The table is ready for use.
    • INACCESSIBLE_ENCRYPTION_CREDENTIALS - The KMS key used to encrypt the table is inaccessible. Table operations may fail due to failure to use the KMS key. DynamoDB will initiate the table archival process when a table's KMS key remains inaccessible for more than seven days.
    • ARCHIVING - The table is being archived. Operations are not allowed until archival is complete.
    • ARCHIVED - The table has been archived. See the ArchivalReason for more information.
    *)
  22. key_schema : key_schema_element list option;
    (*

    The primary key structure for the table. Each KeySchemaElement consists of:

    • AttributeName - The name of the attribute.
    • KeyType - The role of the attribute:

      • HASH - partition key
      • RANGE - sort key

      The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

      The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    For more information about primary keys, see Primary Key in the Amazon DynamoDB Developer Guide.

    *)
  23. table_name : string option;
    (*

    The name of the table.

    *)
  24. attribute_definitions : attribute_definition list option;
    (*

    An array of AttributeDefinition objects. Each of these objects describes one attribute in the table and index key schema.

    Each AttributeDefinition object in this array is composed of:

    • AttributeName - The name of the attribute.
    • AttributeType - The data type for the attribute.
    *)
}

Represents the properties of a table.

type update_table_output = {
  1. table_description : table_description option;
    (*

    Represents the properties of the table.

    *)
}

Represents the output of an UpdateTable operation.

type provisioned_throughput = {
  1. write_capacity_units : int;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.

    If read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.

    *)
  2. read_capacity_units : int;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.

    If read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.

    *)
}

Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the UpdateTable operation.

For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
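
A minimal sketch (the capacity numbers are illustrative):

(* 10 strongly consistent reads and 5 writes per second. *)
let throughput : provisioned_throughput =
  { read_capacity_units = 10; write_capacity_units = 5 }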

type update_global_secondary_index_action = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    Updates the maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. provisioned_throughput : provisioned_throughput option;
    (*

    Represents the provisioned throughput settings for the specified global secondary index.

    For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  3. index_name : string;
    (*

    The name of the global secondary index to be updated.

    *)
}

Represents the new provisioned throughput settings to be applied to a global secondary index.

type create_global_secondary_index_action = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    The maximum number of read and write units for the global secondary index being created. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. provisioned_throughput : provisioned_throughput option;
    (*

    Represents the provisioned throughput settings for the specified global secondary index.

    For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  3. projection : projection;
    (*

    Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  4. key_schema : key_schema_element list;
    (*

    The key schema for the global secondary index.

    *)
  5. index_name : string;
    (*

    The name of the global secondary index to be created.

    *)
}

Represents a new global secondary index to be added to an existing table.

type delete_global_secondary_index_action = {
  1. index_name : string;
    (*

    The name of the global secondary index to be deleted.

    *)
}

Represents a global secondary index to be deleted from an existing table.

type global_secondary_index_update = {
  1. delete : delete_global_secondary_index_action option;
    (*

    The name of an existing global secondary index to be removed.

    *)
  2. create : create_global_secondary_index_action option;
    (*

    The parameters required for creating a global secondary index on an existing table:

    • IndexName
    • KeySchema
    • AttributeDefinitions
    • Projection
    • ProvisionedThroughput
    *)
  3. update : update_global_secondary_index_action option;
    (*

    The name of an existing global secondary index, along with new provisioned throughput settings to be applied to that index.

    *)
}

Represents one of the following:

  • A new global secondary index to be added to an existing table.
  • New provisioned throughput parameters for an existing global secondary index.
  • An existing global secondary index to be removed from an existing table.
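
A sketch of the second case, raising the throughput of a hypothetical "by_email" index (exactly one of the three fields may be set per element):

(* Only update is set; create and delete must be None in the same
   element. *)
let gsi_update : global_secondary_index_update =
  { create = None;
    delete = None;
    update =
      Some
        { index_name = "by_email";
          provisioned_throughput =
            Some { read_capacity_units = 20; write_capacity_units = 10 };
          on_demand_throughput = None } }
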
type sse_specification = {
  1. kms_master_key_id : string option;
    (*

    The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.

    *)
  2. sse_type : sse_type option;
    (*

    Server-side encryption type. The only supported value is:

    • KMS - Server-side encryption that uses Key Management Service. The key is stored in your account and is managed by KMS (KMS charges apply).
    *)
  3. enabled : bool option;
    (*

    Indicates whether server-side encryption is done using an Amazon Web Services managed key or an Amazon Web Services owned key. If enabled (true), server-side encryption type is set to KMS and an Amazon Web Services managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to Amazon Web Services owned key.

    *)
}

Represents the settings used to enable server-side encryption.
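
A sketch of enabling KMS encryption with a customer managed key (the key alias is illustrative):

(* Omit kms_master_key_id to use the default DynamoDB key
   alias/aws/dynamodb. *)
let sse : sse_specification =
  { enabled = Some true;
    sse_type = Some KMS;
    kms_master_key_id = Some "alias/my-table-key" }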

type replica_global_secondary_index = {
  1. on_demand_throughput_override : on_demand_throughput_override option;
    (*

    Overrides the maximum on-demand throughput settings for the specified global secondary index in the specified replica table.

    *)
  2. provisioned_throughput_override : provisioned_throughput_override option;
    (*

    Replica table GSI-specific provisioned throughput. If not specified, uses the source table GSI's read capacity settings.

    *)
  3. index_name : string;
    (*

    The name of the global secondary index.

    *)
}

Represents the properties of a replica global secondary index.

type create_replication_group_member_action = {
  1. table_class_override : table_class option;
    (*

    Replica-specific table class. If not specified, uses the source table's table class.

    *)
  2. global_secondary_indexes : replica_global_secondary_index list option;
    (*

    Replica-specific global secondary index settings.

    *)
  3. on_demand_throughput_override : on_demand_throughput_override option;
    (*

    The maximum on-demand throughput settings for the specified replica table being created. You can only modify MaxReadRequestUnits, because you can't modify MaxWriteRequestUnits for individual replica tables.

    *)
  4. provisioned_throughput_override : provisioned_throughput_override option;
    (*

    Replica-specific provisioned throughput. If not specified, uses the source table's provisioned throughput settings.

    *)
  5. kms_master_key_id : string option;
    (*

    The KMS key that should be used for KMS encryption in the new replica. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB KMS key alias/aws/dynamodb.

    *)
  6. region_name : string;
    (*

    The Region where the new replica will be created.

    *)
}

Represents a replica to be created.

type update_replication_group_member_action = {
  1. table_class_override : table_class option;
    (*

    Replica-specific table class. If not specified, uses the source table's table class.

    *)
  2. global_secondary_indexes : replica_global_secondary_index list option;
    (*

    Replica-specific global secondary index settings.

    *)
  3. on_demand_throughput_override : on_demand_throughput_override option;
    (*

    Overrides the maximum on-demand throughput for the replica table.

    *)
  4. provisioned_throughput_override : provisioned_throughput_override option;
    (*

    Replica-specific provisioned throughput. If not specified, uses the source table's provisioned throughput settings.

    *)
  5. kms_master_key_id : string option;
    (*

    The KMS key of the replica that should be used for KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB KMS key alias/aws/dynamodb.

    *)
  6. region_name : string;
    (*

    The Region where the replica exists.

    *)
}

Represents a replica to be modified.

type delete_replication_group_member_action = {
  1. region_name : string;
    (*

    The Region where the replica exists.

    *)
}

Represents a replica to be deleted.

type replication_group_update = {
  1. delete : delete_replication_group_member_action option;
    (*

    The parameters required for deleting a replica for the table.

    *)
  2. update : update_replication_group_member_action option;
    (*

    The parameters required for updating a replica for the table.

    *)
  3. create : create_replication_group_member_action option;
    (*

    The parameters required for creating a replica for the table.

    *)
}

Represents one of the following:

  • A new replica to be added to an existing regional table or global table. This request invokes the CreateTableReplica action in the destination Region.
  • New parameters for an existing replica. This request invokes the UpdateTable action in the destination Region.
  • An existing replica to be deleted. The request invokes the DeleteTableReplica action in the destination Region, deleting the replica and all of its items in the destination Region.

When you manually remove a table or global table replica, you do not automatically remove any associated scalable targets, scaling policies, or CloudWatch alarms.
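
A sketch of the first case, adding a replica that inherits the source table's settings (the Region is illustrative):

(* Only create is set; every override is left as None so the replica
   inherits from the source table. *)
let add_replica : replication_group_update =
  { create =
      Some
        { region_name = "eu-west-1";
          kms_master_key_id = None;
          provisioned_throughput_override = None;
          on_demand_throughput_override = None;
          global_secondary_indexes = None;
          table_class_override = None };
    update = None;
    delete = None }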

type update_table_input = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    Updates the maximum number of read and write units for the specified table in on-demand capacity mode. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. deletion_protection_enabled : bool option;
    (*

    Indicates whether deletion protection is to be enabled (true) or disabled (false) on the table.

    *)
  3. table_class : table_class option;
    (*

    The table class of the table to be updated. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.

    *)
  4. replica_updates : replication_group_update list option;
    (*

    A list of replica update actions (create, delete, or update) for the table.

    For global tables, this property only applies to global tables using Version 2019.11.21 (Current version).

    *)
  5. sse_specification : sse_specification option;
    (*

    The new server-side encryption settings for the specified table.

    *)
  6. stream_specification : stream_specification option;
    (*

    Represents the DynamoDB Streams configuration for the table.

    You receive a ValidationException if you try to enable a stream on a table that already has a stream, or if you try to disable a stream on a table that doesn't have a stream.

    *)
  7. global_secondary_index_updates : global_secondary_index_update list option;
    (*

    An array of one or more global secondary indexes for the table. For each index in the array, you can request one action:

    • Create - add a new global secondary index to the table.
    • Update - modify the provisioned throughput settings of an existing global secondary index.
    • Delete - remove a global secondary index from the table.

    You can create or delete only one global secondary index per UpdateTable operation.

    For more information, see Managing Global Secondary Indexes in the Amazon DynamoDB Developer Guide.

    *)
  8. provisioned_throughput : provisioned_throughput option;
    (*

    The new provisioned throughput settings for the specified table or index.

    *)
  9. billing_mode : billing_mode option;
    (*

    Controls how you are charged for read and write throughput and how you manage capacity. When switching from pay-per-request to provisioned capacity, initial provisioned capacity values must be set. The initial provisioned capacity values are estimated based on the consumed read and write capacity of your table and global secondary indexes over the past 30 minutes.

    • PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned capacity mode.
    • PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-demand capacity mode.
    *)
  10. table_name : string;
    (*

    The name of the table to be updated. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  11. attribute_definitions : attribute_definition list option;
    (*

    An array of attributes that describe the key schema for the table and indexes. If you are adding a new global secondary index to the table, AttributeDefinitions must include the key element(s) of the new index.

    *)
}

Represents the input of an UpdateTable operation.
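
As a hedged example, the record below sketches an UpdateTable input that only enables deletion protection, leaving every other setting unchanged; the table name "Music" is illustrative:

  let enable_protection : update_table_input = {
    on_demand_throughput = None;
    deletion_protection_enabled = Some true;
    table_class = None;
    replica_updates = None;
    sse_specification = None;
    stream_specification = None;
    global_secondary_index_updates = None;
    provisioned_throughput = None;
    billing_mode = None;
    table_name = "Music";
    attribute_definitions = None;
  }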

type destination_status =
  1. | UPDATING
  2. | ENABLE_FAILED
  3. | DISABLED
  4. | DISABLING
  5. | ACTIVE
  6. | ENABLING
type approximate_creation_date_time_precision =
  1. | MICROSECOND
  2. | MILLISECOND
type update_kinesis_streaming_configuration = {
  1. approximate_creation_date_time_precision : approximate_creation_date_time_precision option;
    (*

    Enables updating the precision of the Kinesis data stream timestamp.

    *)
}

Enables updating the configuration for Kinesis Streaming.

type update_kinesis_streaming_destination_output = {
  1. update_kinesis_streaming_configuration : update_kinesis_streaming_configuration option;
    (*

    The command to update the Kinesis streaming destination configuration.

    *)
  2. destination_status : destination_status option;
    (*

    The status of the attempt to update the Kinesis streaming destination output.

    *)
  3. stream_arn : string option;
    (*

    The ARN for the Kinesis stream input.

    *)
  4. table_name : string option;
    (*

    The table name for the Kinesis streaming destination output.

    *)
}
type update_kinesis_streaming_destination_input = {
  1. update_kinesis_streaming_configuration : update_kinesis_streaming_configuration option;
    (*

    The command to update the Kinesis stream configuration.

    *)
  2. stream_arn : string;
    (*

    The Amazon Resource Name (ARN) for the Kinesis stream input.

    *)
  3. table_name : string;
    (*

    The table name for the Kinesis streaming destination input. You can also provide the ARN of the table in this parameter.

    *)
}
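
A minimal sketch of an UpdateKinesisStreamingDestination input that switches the timestamp precision to milliseconds; the stream ARN and table name are placeholders:

  let input : update_kinesis_streaming_destination_input = {
    update_kinesis_streaming_configuration =
      Some { approximate_creation_date_time_precision = Some MILLISECOND };
    stream_arn = "arn:aws:kinesis:us-east-1:123456789012:stream/example";
    table_name = "Music";
  }
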
type capacity = {
  1. capacity_units : float option;
    (*

    The total number of capacity units consumed on a table or an index.

    *)
  2. write_capacity_units : float option;
    (*

    The total number of write capacity units consumed on a table or an index.

    *)
  3. read_capacity_units : float option;
    (*

    The total number of read capacity units consumed on a table or an index.

    *)
}

Represents the amount of provisioned throughput capacity consumed on a table or an index.

type consumed_capacity = {
  1. global_secondary_indexes : (string * capacity) list option;
    (*

    The amount of throughput consumed on each global index affected by the operation.

    *)
  2. local_secondary_indexes : (string * capacity) list option;
    (*

    The amount of throughput consumed on each local index affected by the operation.

    *)
  3. table : capacity option;
    (*

    The amount of throughput consumed on the table affected by the operation.

    *)
  4. write_capacity_units : float option;
    (*

    The total number of write capacity units consumed by the operation.

    *)
  5. read_capacity_units : float option;
    (*

    The total number of read capacity units consumed by the operation.

    *)
  6. capacity_units : float option;
    (*

    The total number of capacity units consumed by the operation.

    *)
  7. table_name : string option;
    (*

    The name of the table that was affected by the operation. If you had specified the Amazon Resource Name (ARN) of a table in the input, you'll see the table ARN in the response.

    *)
}

The capacity units consumed by an operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the request asked for it. For more information, see Provisioned capacity mode in the Amazon DynamoDB Developer Guide.
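
For example, a small helper (a sketch, not part of this module) can report the aggregate units from a ConsumedCapacity value:

  let log_capacity (cc : consumed_capacity) =
    Printf.printf "table %s consumed %.1f capacity units\n"
      (Option.value cc.table_name ~default:"<unknown>")
      (Option.value cc.capacity_units ~default:0.0)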

type item_collection_metrics = {
  1. size_estimate_range_g_b : float list option;
    (*

    An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.

    The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.

    *)
  2. item_collection_key : (string * attribute_value) list option;
    (*

    The partition key value of the item collection. This value is the same as the partition key value of the item.

    *)
}

Information about item collections, if any, that were affected by the operation. ItemCollectionMetrics is only returned if the request asked for it. If the table does not have any local secondary indexes, this information is not returned in the response.

type update_item_output = {
  1. item_collection_metrics : item_collection_metrics option;
    (*

    Information about item collections, if any, that were affected by the UpdateItem operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.

    Each ItemCollectionMetrics element consists of:

    • ItemCollectionKey - The partition key value of the item collection. This is the same as the partition key value of the item itself.
    • SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.

      The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.

    *)
  2. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the UpdateItem operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Capacity unit consumption for write operations in the Amazon DynamoDB Developer Guide.

    *)
  3. attributes : (string * attribute_value) list option;
    (*

    A map of attribute values as they appear before or after the UpdateItem operation, as determined by the ReturnValues parameter.

    The Attributes map is only present if the update was successful and ReturnValues was specified as something other than NONE in the request. Each element represents one attribute.

    *)
}

Represents the output of an UpdateItem operation.

type attribute_action =
  1. | DELETE
  2. | PUT
  3. | ADD
type attribute_value_update = {
  1. action : attribute_action option;
    (*

    Specifies how to perform the update. Valid values are PUT (default), DELETE, and ADD. The behavior depends on whether the specified primary key already exists in the table.

    If an item with the specified Key is found in the table:

    • PUT - Adds the specified attribute to the item. If the attribute already exists, it is replaced by the new value.
    • DELETE - If no value is specified, the attribute and its value are removed from the item. The data type of the specified value must match the existing value's data type.

      If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set [a,b,c] and the DELETE action specified [a,c], then the final attribute value would be [b]. Specifying an empty set is an error.

    • ADD - If the attribute does not already exist, then the attribute and its values are added to the item. If the attribute does exist, then the behavior of ADD depends on the data type of the attribute:

      • If the existing attribute is a number, and if Value is also a number, then the Value is mathematically added to the existing attribute. If Value is a negative number, then it is subtracted from the existing attribute.

        If you use ADD to increment or decrement a number value for an item that doesn't exist before the update, DynamoDB uses 0 as the initial value.

        In addition, if you use ADD to update an existing item, and intend to increment or decrement an attribute value which does not yet exist, DynamoDB uses 0 as the initial value. For example, suppose that the item you want to update does not yet have an attribute named itemcount, but you decide to ADD the number 3 to this attribute anyway, even though it currently does not exist. DynamoDB will create the itemcount attribute, set its initial value to 0, and finally add 3 to it. The result will be a new itemcount attribute in the item, with a value of 3.

      • If the existing data type is a set, and if the Value is also a set, then the Value is added to the existing set. (This is a set operation, not mathematical addition.) For example, if the attribute value was the set [1,2], and the ADD action specified [3], then the final attribute value would be [1,2,3]. An error occurs if an Add action is specified for a set attribute and the attribute type specified does not match the existing set type.

        Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings. The same holds true for number sets and binary sets.

      This action is only valid for an existing attribute whose data type is number or is a set. Do not use ADD for any other data types.

    If no item with the specified Key is found:

    • PUT - DynamoDB creates a new item with the specified primary key, and then adds the attribute.
    • DELETE - Nothing happens; there is no attribute to delete.
    • ADD - DynamoDB creates a new item with the supplied primary key and number (or set) for the attribute value. The only data types allowed are number, number set, string set or binary set.
    *)
  2. value : attribute_value option;
    (*

    Represents the data for an attribute.

    Each attribute value is described as a name-value pair. The name is the data type, and the value is the data itself.

    For more information, see Data Types in the Amazon DynamoDB Developer Guide.

    *)
}

For the UpdateItem operation, represents the attributes to be modified, the action to perform on each, and the new value for each.

You cannot use UpdateItem to update any primary key attributes. Instead, you will need to delete the item, and then use PutItem to create a new item with new attributes.

Attribute values cannot be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException exception.
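
A short sketch of the legacy AttributeUpdates shape, using the itemcount example above (the attribute name and value are illustrative):

  let bump_itemcount : string * attribute_value_update =
    ("itemcount", { action = Some ADD; value = Some (N "3") })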

type comparison_operator =
  1. | BEGINS_WITH
  2. | NOT_CONTAINS
  3. | CONTAINS
  4. | NULL
  5. | NOT_NULL
  6. | BETWEEN
  7. | GT
  8. | GE
  9. | LT
  10. | LE
  11. | IN
  12. | NE
  13. | EQ
type expected_attribute_value = {
  1. attribute_value_list : attribute_value list option;
    (*

    One or more values to evaluate against the supplied attribute. The number of values in the list depends on the ComparisonOperator being used.

    For type Number, value comparisons are numeric.

    String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a is greater than A, and a is greater than B. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters.

    For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.

    For information on specifying data types in JSON, see JSON Data Format in the Amazon DynamoDB Developer Guide.

    *)
  2. comparison_operator : comparison_operator option;
    (*

    A comparator for evaluating attributes in the AttributeValueList. For example, equals, greater than, less than, etc.

    The following comparison operators are available:

    EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN

    The following are descriptions of each comparison operator.

    • EQ : Equal. EQ is supported for all data types, including lists and maps.

      AttributeValueList can contain only one AttributeValue element of type String, Number, Binary, String Set, Number Set, or Binary Set. If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.

    • NE : Not equal. NE is supported for all data types, including lists and maps.

      AttributeValueList can contain only one AttributeValue of type String, Number, Binary, String Set, Number Set, or Binary Set. If an item contains an AttributeValue of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.

    • LE : Less than or equal.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • LT : Less than.

      AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • GE : Greater than or equal.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • GT : Greater than.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • NOT_NULL : The attribute exists. NOT_NULL is supported for all data types, including lists and maps.

      This operator tests for the existence of an attribute, not its data type. If the data type of attribute "a" is null, and you evaluate it using NOT_NULL, the result is a Boolean true. This result is because the attribute "a" exists; its data type is not relevant to the NOT_NULL comparison operator.

    • NULL : The attribute does not exist. NULL is supported for all data types, including lists and maps.

      This operator tests for the nonexistence of an attribute, not its data type. If the data type of attribute "a" is null, and you evaluate it using NULL, the result is a Boolean false. This is because the attribute "a" exists; its data type is not relevant to the NULL comparison operator.

    • CONTAINS : Checks for a subsequence, or value in a set.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If the target attribute of the comparison is of type String, then the operator checks for a substring match. If the target attribute of the comparison is of type Binary, then the operator looks for a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it finds an exact match with any member of the set.

      CONTAINS is supported for lists: When evaluating "a CONTAINS b", "a" can be a list; however, "b" cannot be a set, a map, or a list.

    • NOT_CONTAINS : Checks for absence of a subsequence, or absence of a value in a set.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If the target attribute of the comparison is a String, then the operator checks for the absence of a substring match. If the target attribute of the comparison is Binary, then the operator checks for the absence of a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it does not find an exact match with any member of the set.

      NOT_CONTAINS is supported for lists: When evaluating "a NOT CONTAINS b", "a" can be a list; however, "b" cannot be a set, a map, or a list.

    • BEGINS_WITH : Checks for a prefix.

      AttributeValueList can contain only one AttributeValue of type String or Binary (not a Number or a set type). The target attribute of the comparison must be of type String or Binary (not a Number or a set type).

    • IN : Checks for matching elements in a list.

      AttributeValueList can contain one or more AttributeValue elements of type String, Number, or Binary. These attributes are compared against an existing attribute of an item. If any elements of the input are equal to the item attribute, the expression evaluates to true.

    • BETWEEN : Greater than or equal to the first value, and less than or equal to the second value.

      AttributeValueList must contain two AttributeValue elements of the same type, either String, Number, or Binary (not a set type). A target attribute matches if the target value is greater than, or equal to, the first element and less than, or equal to, the second element. If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}

    *)
  3. exists : bool option;
    (*

    Causes DynamoDB to evaluate the value before attempting a conditional operation:

    • If Exists is true, DynamoDB will check to see if that attribute value already exists in the table. If it is found, then the operation succeeds. If it is not found, the operation fails with a ConditionCheckFailedException.
    • If Exists is false, DynamoDB assumes that the attribute value does not exist in the table. If in fact the value does not exist, then the assumption is valid and the operation succeeds. If the value is found, despite the assumption that it does not exist, the operation fails with a ConditionCheckFailedException.

    The default setting for Exists is true. If you supply a Value all by itself, DynamoDB assumes the attribute exists: You don't have to set Exists to true, because it is implied.

    DynamoDB returns a ValidationException if:

    • Exists is true but there is no Value to check. (You expect a value to exist, but don't specify what that value is.)
    • Exists is false but you also provide a Value. (You cannot expect an attribute to have a value, while also expecting it not to exist.)
    *)
  4. value : attribute_value option;
    (*

    Represents the data for the expected attribute.

    Each attribute value is described as a name-value pair. The name is the data type, and the value is the data itself.

    For more information, see Data Types in the Amazon DynamoDB Developer Guide.

    *)
}

Represents a condition to be compared with an attribute value. This condition can be used with DeleteItem, PutItem, or UpdateItem operations; if the comparison evaluates to true, the operation succeeds; if not, the operation fails. You can use ExpectedAttributeValue in one of two different ways:

  • Use AttributeValueList to specify one or more values to compare against an attribute. Use ComparisonOperator to specify how you want to perform the comparison. If the comparison evaluates to true, then the conditional operation succeeds.
  • Use Value to specify a value that DynamoDB will compare against an attribute. If the values match, then ExpectedAttributeValue evaluates to true and the conditional operation succeeds. Optionally, you can also set Exists to false, indicating that you do not expect to find the attribute value in the table. In this case, the conditional operation succeeds only if the comparison evaluates to false.

Value and Exists are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.
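
Two hedged sketches of the shapes described above: one compares an attribute using AttributeValueList and ComparisonOperator, the other uses Exists alone to require that the attribute be absent (attribute values are illustrative):

  let price_above_100 : expected_attribute_value = {
    attribute_value_list = Some [ N "100" ];
    comparison_operator = Some GT;
    exists = None;
    value = None;
  }

  let must_not_exist : expected_attribute_value = {
    attribute_value_list = None;
    comparison_operator = None;
    exists = Some false;
    value = None;
  }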

type conditional_operator =
  1. | OR
  2. | AND
type return_value =
  1. | UPDATED_NEW
  2. | ALL_NEW
  3. | UPDATED_OLD
  4. | ALL_OLD
  5. | NONE
type return_consumed_capacity =
  1. | NONE
  2. | TOTAL
  3. | INDEXES

Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:

  • INDEXES - The response includes the aggregate ConsumedCapacity for the operation, together with ConsumedCapacity for each table and secondary index that was accessed.

    Note that some operations, such as GetItem and BatchGetItem, do not access any indexes at all. In these cases, specifying INDEXES will only return ConsumedCapacity information for table(s).

  • TOTAL - The response includes only the aggregate ConsumedCapacity for the operation.
  • NONE - No ConsumedCapacity details are included in the response.

type return_item_collection_metrics =
  1. | NONE
  2. | SIZE
type return_values_on_condition_check_failure =
  1. | NONE
  2. | ALL_OLD
type update_item_input = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for an UpdateItem operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

    Available | Backordered | Discontinued

    You would first need to specify ExpressionAttributeValues as follows:

    { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }

    You could then use these values in an expression, such as this:

    ProductStatus IN (:avail, :back, :disc)

    For more information on expression attribute values, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide.) To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information about expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional update to succeed.

    An expression can contain any of the following:

    • Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size

      These function names are case-sensitive.

    • Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
    • Logical operators: AND | OR | NOT

    For more information about condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.

    *)
  5. update_expression : string option;
    (*

    An expression that defines one or more attributes to be updated, the action to be performed on them, and new values for them.

    The following action values are available for UpdateExpression.

    • SET - Adds one or more attributes and values to an item. If any of these attributes already exist, they are replaced by the new values. You can also use SET to add or subtract from an attribute that is of type Number. For example: SET myNum = myNum + :val

      SET supports the following functions:

      • if_not_exists (path, operand) - if the item does not contain an attribute at the specified path, then if_not_exists evaluates to operand; otherwise, it evaluates to path. You can use this function to avoid overwriting an attribute that may already be present in the item.
      • list_append (operand, operand) - evaluates to a list with a new element added to it. You can append the new element to the start or the end of the list by reversing the order of the operands.

      These function names are case-sensitive.

    • REMOVE - Removes one or more attributes from an item.
    • ADD - Adds the specified value to the item, if the attribute does not already exist. If the attribute does exist, then the behavior of ADD depends on the data type of the attribute:

      • If the existing attribute is a number, and if Value is also a number, then Value is mathematically added to the existing attribute. If Value is a negative number, then it is subtracted from the existing attribute.

        If you use ADD to increment or decrement a number value for an item that doesn't exist before the update, DynamoDB uses 0 as the initial value.

        Similarly, if you use ADD for an existing item to increment or decrement an attribute value that doesn't exist before the update, DynamoDB uses 0 as the initial value. For example, suppose that the item you want to update doesn't have an attribute named itemcount, but you decide to ADD the number 3 to this attribute anyway. DynamoDB will create the itemcount attribute, set its initial value to 0, and finally add 3 to it. The result will be a new itemcount attribute in the item, with a value of 3.

      • If the existing data type is a set and if Value is also a set, then Value is added to the existing set. For example, if the attribute value is the set [1,2], and the ADD action specified [3], then the final attribute value is [1,2,3]. An error occurs if an ADD action is specified for a set attribute and the attribute type specified does not match the existing set type.

        Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings.

      The ADD action only supports Number and set data types. In addition, ADD can only be used on top-level attributes, not nested attributes.

    • DELETE - Deletes an element from a set.

      If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set [a,b,c] and the DELETE action specifies [a,c], then the final attribute value is [b]. Specifying an empty set is an error.

      The DELETE action only supports set data types. In addition, DELETE can only be used on top-level attributes, not nested attributes.

    You can have many actions in a single expression, such as the following: SET a=:value1, b=:value2 DELETE :value3, :value4, :value5

    For more information on update expressions, see Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.

    *)
  6. return_item_collection_metrics : return_item_collection_metrics option;
    (*

    Determines whether item collection metrics are returned. If set to SIZE, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE (the default), no statistics are returned.

    *)
  7. return_consumed_capacity : return_consumed_capacity option;
  8. return_values : return_value option;
    (*

    Use ReturnValues if you want to get the item attributes as they appear before or after they are successfully updated. For UpdateItem, the valid values are:

    • NONE - If ReturnValues is not specified, or if its value is NONE, then nothing is returned. (This setting is the default for ReturnValues.)
    • ALL_OLD - Returns all of the attributes of the item, as they appeared before the UpdateItem operation.
    • UPDATED_OLD - Returns only the updated attributes, as they appeared before the UpdateItem operation.
    • ALL_NEW - Returns all of the attributes of the item, as they appear after the UpdateItem operation.
    • UPDATED_NEW - Returns only the updated attributes, as they appear after the UpdateItem operation.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    The values returned are strongly consistent.

    *)
  9. conditional_operator : conditional_operator option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see ConditionalOperator in the Amazon DynamoDB Developer Guide.

    *)
  10. expected : (string * expected_attribute_value) list option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

    *)
  11. attribute_updates : (string * attribute_value_update) list option;
    (*

    This is a legacy parameter. Use UpdateExpression instead. For more information, see AttributeUpdates in the Amazon DynamoDB Developer Guide.

    *)
  12. key : (string * attribute_value) list;
    (*

    The primary key of the item to be updated. Each element consists of an attribute name and a value for that attribute.

    For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.

    *)
  13. table_name : string;
    (*

    The name of the table containing the item to update. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of an UpdateItem operation.
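
Pulling the pieces together, a minimal UpdateItem input sketch that sets #P (Percentile) to :val, mirroring the substitution examples above; the table name and key are illustrative:

  let input : update_item_input = {
    return_values_on_condition_check_failure = None;
    expression_attribute_values = Some [ (":val", N "90") ];
    expression_attribute_names = Some [ ("#P", "Percentile") ];
    condition_expression = None;
    update_expression = Some "SET #P = :val";
    return_item_collection_metrics = None;
    return_consumed_capacity = Some TOTAL;
    return_values = Some UPDATED_NEW;
    conditional_operator = None;
    expected = None;
    attribute_updates = None;
    key = [ ("Id", N "42") ];
    table_name = "Thread";
  }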

type transaction_conflict_exception = {
  1. message : string option;
}

Operation was rejected because there is an ongoing transaction for the item.

type request_limit_exceeded = {
  1. message : string option;
}

Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.

type provisioned_throughput_exceeded_exception = {
  1. message : string option;
    (*

    You exceeded your maximum allowed provisioned throughput.

    *)
}

Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.

type item_collection_size_limit_exceeded_exception = {
  1. message : string option;
    (*

    The total size of an item collection has exceeded the maximum limit of 10 gigabytes.

    *)
}

An item collection is too large. This exception is only returned for tables that have one or more local secondary indexes.

type conditional_check_failed_exception = {
  1. item : (string * attribute_value) list option;
    (*

    Item which caused the ConditionalCheckFailedException.

    *)
  2. message : string option;
    (*

    The conditional request failed.

    *)
}

A condition specified in the operation could not be evaluated.

type replica_global_secondary_index_settings_description = {
  1. provisioned_write_capacity_auto_scaling_settings : auto_scaling_settings_description option;
    (*

    Auto scaling settings for a global secondary index replica's write capacity units.

    *)
  2. provisioned_write_capacity_units : int option;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException.

    *)
  3. provisioned_read_capacity_auto_scaling_settings : auto_scaling_settings_description option;
    (*

    Auto scaling settings for a global secondary index replica's read capacity units.

    *)
  4. provisioned_read_capacity_units : int option;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException.

    *)
  5. index_status : index_status option;
    (*

    The current status of the global secondary index:

    • CREATING - The global secondary index is being created.
    • UPDATING - The global secondary index is being updated.
    • DELETING - The global secondary index is being deleted.
    • ACTIVE - The global secondary index is ready for use.
    *)
  6. index_name : string;
    (*

    The name of the global secondary index. The name must be unique among all other indexes on this table.

    *)
}

Represents the properties of a global secondary index.

type replica_settings_description = {
  1. replica_table_class_summary : table_class_summary option;
  2. replica_global_secondary_index_settings : replica_global_secondary_index_settings_description list option;
    (*

    Replica global secondary index settings for the global table.

    *)
  3. replica_provisioned_write_capacity_auto_scaling_settings : auto_scaling_settings_description option;
    (*

    Auto scaling settings for a global table replica's write capacity units.

    *)
  4. replica_provisioned_write_capacity_units : int option;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.

    *)
  5. replica_provisioned_read_capacity_auto_scaling_settings : auto_scaling_settings_description option;
    (*

    Auto scaling settings for a global table replica's read capacity units.

    *)
  6. replica_provisioned_read_capacity_units : int option;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.

    *)
  7. replica_billing_mode_summary : billing_mode_summary option;
    (*

    The read/write capacity mode of the replica.

    *)
  8. replica_status : replica_status option;
    (*

    The current state of the Region:

    • CREATING - The Region is being created.
    • UPDATING - The Region is being updated.
    • DELETING - The Region is being deleted.
    • ACTIVE - The Region is ready for use.
    *)
  9. region_name : string;
    (*

    The Region name of the replica.

    *)
}

Represents the properties of a replica.

type update_global_table_settings_output = {
  1. replica_settings : replica_settings_description list option;
    (*

    The Region-specific settings for the global table.

    *)
  2. global_table_name : string option;
    (*

    The name of the global table.

    *)
}
type global_table_global_secondary_index_settings_update = {
  1. provisioned_write_capacity_auto_scaling_settings_update : auto_scaling_settings_update option;
    (*

    Auto scaling settings for managing a global secondary index's write capacity units.

    *)
  2. provisioned_write_capacity_units : int option;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException.

    *)
  3. index_name : string;
    (*

    The name of the global secondary index. The name must be unique among all other indexes on this table.

    *)
}

Represents the settings of a global secondary index for a global table that will be modified.

type replica_global_secondary_index_settings_update = {
  1. provisioned_read_capacity_auto_scaling_settings_update : auto_scaling_settings_update option;
    (*

    Auto scaling settings for managing a global secondary index replica's read capacity units.

    *)
  2. provisioned_read_capacity_units : int option;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException.

    *)
  3. index_name : string;
    (*

    The name of the global secondary index. The name must be unique among all other indexes on this table.

    *)
}

Represents the settings of a global secondary index for a global table that will be modified.

type replica_settings_update = {
  1. replica_table_class : table_class option;
    (*

    Replica-specific table class. If not specified, uses the source table's table class.

    *)
  2. replica_global_secondary_index_settings_update : replica_global_secondary_index_settings_update list option;
    (*

    Represents the settings of a global secondary index for a global table that will be modified.

    *)
  3. replica_provisioned_read_capacity_auto_scaling_settings_update : auto_scaling_settings_update option;
    (*

    Auto scaling settings for managing a global table replica's read capacity units.

    *)
  4. replica_provisioned_read_capacity_units : int option;
    (*

    The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.

    *)
  5. region_name : string;
    (*

    The Region of the replica to be added.

    *)
}

Represents the settings for a global table in a Region that will be modified.

type update_global_table_settings_input = {
  1. replica_settings_update : replica_settings_update list option;
    (*

    Represents the settings for a global table in a Region that will be modified.

    *)
  2. global_table_global_secondary_index_settings_update : global_table_global_secondary_index_settings_update list option;
    (*

    Represents the settings of a global secondary index for a global table that will be modified.

    *)
  3. global_table_provisioned_write_capacity_auto_scaling_settings_update : auto_scaling_settings_update option;
    (*

    Auto scaling settings for managing provisioned write capacity for the global table.

    *)
  4. global_table_provisioned_write_capacity_units : int option;
    (*

    The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException.

    *)
  5. global_table_billing_mode : billing_mode option;
    (*

    The billing mode of the global table. If GlobalTableBillingMode is not specified, the global table defaults to PROVISIONED capacity billing mode.

    • PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned capacity mode.
    • PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-demand capacity mode.
    *)
  6. global_table_name : string;
    (*

    The name of the global table.

    *)
}
type replica_not_found_exception = {
  1. message : string option;
}

The specified replica is no longer part of the global table.

type index_not_found_exception = {
  1. message : string option;
}

The operation tried to access a nonexistent index.

type global_table_not_found_exception = {
  1. message : string option;
}

The specified global table does not exist.

type global_table_status =
  1. | UPDATING
  2. | DELETING
  3. | ACTIVE
  4. | CREATING
type global_table_description = {
  1. global_table_name : string option;
    (*

    The global table name.

    *)
  2. global_table_status : global_table_status option;
    (*

    The current state of the global table:

    • CREATING - The global table is being created.
    • UPDATING - The global table is being updated.
    • DELETING - The global table is being deleted.
    • ACTIVE - The global table is ready for use.
    *)
  3. creation_date_time : float option;
    (*

    The creation time of the global table.

    *)
  4. global_table_arn : string option;
    (*

    The unique identifier of the global table.

    *)
  5. replication_group : replica_description list option;
    (*

    The Regions where the global table has replicas.

    *)
}

Contains details about the global table.

type update_global_table_output = {
  1. global_table_description : global_table_description option;
    (*

    Contains the details of the global table.

    *)
}
type create_replica_action = {
  1. region_name : string;
    (*

    The Region of the replica to be added.

    *)
}

Represents a replica to be added.

type delete_replica_action = {
  1. region_name : string;
    (*

    The Region of the replica to be removed.

    *)
}

Represents a replica to be removed.

type replica_update = {
  1. delete : delete_replica_action option;
    (*

    The name of the existing replica to be removed.

    *)
  2. create : create_replica_action option;
    (*

    The parameters required for creating a replica on an existing global table.

    *)
}

Represents one of the following:

  • A new replica to be added to an existing global table.
  • New parameters for an existing replica.
  • An existing replica to be removed from an existing global table.

type update_global_table_input = {
  1. replica_updates : replica_update list;
    (*

    A list of Regions that should be added or removed from the global table.

    *)
  2. global_table_name : string;
    (*

    The global table name.

    *)
}
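
A sketch of an UpdateGlobalTable input that adds one replica; the Region and table name are placeholders:

  let add_replica : update_global_table_input = {
    replica_updates =
      [ { create = Some { region_name = "eu-west-1" }; delete = None } ];
    global_table_name = "Music";
  }
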
type table_not_found_exception = {
  1. message : string option;
}

A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.

type replica_already_exists_exception = {
  1. message : string option;
}

The specified replica is already part of the global table.

type contributor_insights_status =
  1. | FAILED
  2. | DISABLED
  3. | DISABLING
  4. | ENABLED
  5. | ENABLING
type update_contributor_insights_output = {
  1. contributor_insights_status : contributor_insights_status option;
    (*

    The status of contributor insights.

    *)
  2. index_name : string option;
    (*

    The name of the global secondary index, if applicable.

    *)
  3. table_name : string option;
    (*

    The name of the table.

    *)
}
type contributor_insights_action =
  1. | DISABLE
  2. | ENABLE
type update_contributor_insights_input = {
  1. contributor_insights_action : contributor_insights_action;
    (*

    Represents the contributor insights action.

    *)
  2. index_name : string option;
    (*

    The global secondary index name, if applicable.

    *)
  3. table_name : string;
    (*

    The name of the table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type continuous_backups_status =
  1. | DISABLED
  2. | ENABLED
type point_in_time_recovery_status =
  1. | DISABLED
  2. | ENABLED
type point_in_time_recovery_description = {
  1. latest_restorable_date_time : float option;
    (*

    LatestRestorableDateTime is typically 5 minutes before the current time.

    *)
  2. earliest_restorable_date_time : float option;
    (*

    Specifies the earliest point in time you can restore your table to. You can restore your table to any point in time during the last 35 days.

    *)
  3. point_in_time_recovery_status : point_in_time_recovery_status option;
    (*

    The current state of point in time recovery:

    • ENABLED - Point in time recovery is enabled.
    • DISABLED - Point in time recovery is disabled.
    *)
}

The description of the point in time settings applied to the table.

type continuous_backups_description = {
  1. point_in_time_recovery_description : point_in_time_recovery_description option;
    (*

    The description of the point in time recovery settings applied to the table.

    *)
  2. continuous_backups_status : continuous_backups_status;
    (*

    ContinuousBackupsStatus can be one of the following states: ENABLED, DISABLED

    *)
}

Represents the continuous backups and point in time recovery settings on the table.

type update_continuous_backups_output = {
  1. continuous_backups_description : continuous_backups_description option;
    (*

    Represents the continuous backups and point in time recovery settings on the table.

    *)
}
type point_in_time_recovery_specification = {
  1. point_in_time_recovery_enabled : bool;
    (*

    Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.

    *)
}

Represents the settings used to enable point in time recovery.

type update_continuous_backups_input = {
  1. point_in_time_recovery_specification : point_in_time_recovery_specification;
    (*

    Represents the settings used to enable point in time recovery.

    *)
  2. table_name : string;
    (*

    The name of the table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
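
A minimal sketch that enables point in time recovery; the table name is illustrative:

  let input : update_continuous_backups_input = {
    point_in_time_recovery_specification =
      { point_in_time_recovery_enabled = true };
    table_name = "Music";
  }
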
type continuous_backups_unavailable_exception = {
  1. message : string option;
}

Backups have not yet been enabled for this table.

type update = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    Use ReturnValuesOnConditionCheckFailure to get the item attributes if the Update condition fails. For ReturnValuesOnConditionCheckFailure, the valid values are: NONE and ALL_OLD.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional update to succeed.

    *)
  5. table_name : string;
    (*

    Name of the table for the UpdateItem request. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  6. update_expression : string;
    (*

    An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them.

    *)
  7. key : (string * attribute_value) list;
    (*

    The primary key of the item to be updated. Each element consists of an attribute name and a value for that attribute.

    *)
}

Represents a request to perform an UpdateItem operation.

type untag_resource_input = {
  1. tag_keys : string list;
    (*

    A list of tag keys. Existing tags of the resource whose keys are members of this list will be removed from the DynamoDB resource.

    *)
  2. resource_arn : string;
    (*

    The DynamoDB resource that the tags will be removed from. This value is an Amazon Resource Name (ARN).

    *)
}
type transaction_in_progress_exception = {
  1. message : string option;
}

The transaction with the given request token is already in progress.

Recommended Settings

This is a general recommendation for handling the TransactionInProgressException. These settings help ensure that the client retries will trigger completion of the ongoing TransactWriteItems request.

  • Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5 seconds have elapsed since the first attempt for the TransactWriteItems operation.
  • Set socketTimeout to a value a little lower than the requestTimeout setting.
  • requestTimeout should be set based on the time taken for the individual retries of a single HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances of retries and TransactionInProgressException errors.
  • Use exponential backoff when retrying and tune backoff if needed; see the sketch after the example timeline below.

Assuming default retry policy, example timeout settings based on the guidelines above are as follows:

Example timeline:

  • 0-1000 first attempt
  • 1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
  • 1500-2500 second attempt
  • 2500-3500 second sleep/delay (500 * 2, exponential backoff)
  • 3500-4500 third attempt
  • 4500-6500 third sleep/delay (500 * 2^2)
  • 6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
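
The cadence above can be approximated with a simple exponential backoff loop. The sketch below is not part of this module: attempt is a hypothetical thunk wrapping the TransactWriteItems call, it depends on the unix library, and real code should match only the retriable exception rather than any exception:

  let rec with_backoff ?(retry = 0) ?(max_retries = 4) attempt =
    try attempt () with
    | _ when retry < max_retries ->
      (* 500 ms base delay, doubled on each retry: 0.5 s, 1 s, 2 s, ... *)
      Unix.sleepf (0.5 *. (2.0 ** float_of_int retry));
      with_backoff ~retry:(retry + 1) ~max_retries attempt
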
type cancellation_reason = {
  1. message : string option;
    (*

    Cancellation reason message description.

    *)
  2. code : string option;
    (*

    Status code for the result of the cancelled transaction.

    *)
  3. item : (string * attribute_value) list option;
    (*

    Item in the request which caused the transaction to get cancelled.

    *)
}

An ordered list of errors for each item in the request which caused the transaction to get cancelled. The values of the list are ordered according to the ordering of the TransactWriteItems request parameter. If no error occurred for the associated item, an error with a Null code and Null message will be present.

type transaction_canceled_exception = {
  1. cancellation_reasons : cancellation_reason list option;
    (*

    A list of cancellation reasons.

    *)
  2. message : string option;
}

The entire transaction request was canceled.

DynamoDB cancels a TransactWriteItems request under the following circumstances:

  • A condition in one of the condition expressions is not met.
  • A table in the TransactWriteItems request is in a different account or region.
  • More than one action in the TransactWriteItems operation targets the same item.
  • There is insufficient provisioned capacity for the transaction to be completed.
  • An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
  • There is a user error, such as an invalid data format.
  • There is an ongoing TransactWriteItems operation that conflicts with a concurrent TransactWriteItems request. In this case the TransactWriteItems operation fails with a TransactionCanceledException.

DynamoDB cancels a TransactGetItems request under the following circumstances:

  • There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem, DeleteItem or TransactWriteItems request. In this case the TransactGetItems operation fails with a TransactionCanceledException.
  • A table in the TransactGetItems request is in a different account or region.
  • There is insufficient provisioned capacity for the transaction to be completed.
  • There is a user error, such as an invalid data format.

If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of the requested items; if an item has no error, it will have a None code and a Null message.

Cancellation reason codes and possible error messages (a sketch for inspecting them follows this list):

  • No Errors:

    • Code: None
    • Message: null
  • Conditional Check Failed:

    • Code: ConditionalCheckFailed
    • Message: The conditional request failed.
  • Item Collection Size Limit Exceeded:

    • Code: ItemCollectionSizeLimitExceeded
    • Message: Collection size exceeded.
  • Transaction Conflict:

    • Code: TransactionConflict
    • Message: Transaction is ongoing for the item.
  • Provisioned Throughput Exceeded:

    • Code: ProvisionedThroughputExceeded
    • Messages:

      • The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.

        This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.

      • The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.

        This message is returned when provisioned throughput is exceeded on a provisioned GSI.

  • Throttling Error:

    • Code: ThrottlingError
    • Messages:

      • Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.

        This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.

      • Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.

        This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.

  • Validation Error:

    • Code: ValidationError
    • Messages:

      • One or more parameter values were invalid.
      • The update expression attempted to update the secondary index key beyond allowed size limits.
      • The update expression attempted to update the secondary index key to unsupported type.
      • An operand in the update expression has an incorrect data type.
      • Item size to update has exceeded the maximum allowed size.
      • Number overflow. Attempting to store a number with magnitude larger than supported range.
      • Type mismatch for attribute to update.
      • Nesting Levels have exceeded supported limits.
      • The document path provided in the update expression is invalid for update.
      • The provided expression refers to an attribute that does not exist in the item.
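
As a sketch, the cancellation reasons can be inspected directly; entries for items without errors carry the None code and Null message described above:

  let report (e : transaction_canceled_exception) =
    List.iter
      (fun (r : cancellation_reason) ->
        Printf.printf "code=%s message=%s\n"
          (Option.value r.code ~default:"None")
          (Option.value r.message ~default:"null"))
      (Option.value e.cancellation_reasons ~default:[])
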
type transact_write_items_output = {
  1. item_collection_metrics : (string * item_collection_metrics list) list option;
    (*

    A list of tables that were processed by TransactWriteItems and, for each table, information about any item collections that were affected by individual UpdateItem, PutItem, or DeleteItem operations.

    *)
  2. consumed_capacity : consumed_capacity list option;
    (*

    The capacity units consumed by the entire TransactWriteItems operation. The values of the list are ordered according to the ordering of the TransactItems request parameter.

    *)
}
type condition_check = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    Use ReturnValuesOnConditionCheckFailure to get the item attributes if the ConditionCheck condition fails. For ReturnValuesOnConditionCheckFailure, the valid values are: NONE and ALL_OLD.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression. For more information, see Condition expressions in the Amazon DynamoDB Developer Guide.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. For more information, see Expression attribute names in the Amazon DynamoDB Developer Guide.

    *)
  4. condition_expression : string;
    (*

    A condition that must be satisfied in order for a conditional update to succeed. For more information, see Condition expressions in the Amazon DynamoDB Developer Guide.

    *)
  5. table_name : string;
    (*

    Name of the table for the check item request. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  6. key : (string * attribute_value) list;
    (*

    The primary key of the item to be checked. Each element consists of an attribute name and a value for that attribute.

    *)
}

Represents a request to perform a check that an item exists or to check the condition of specific attributes of the item.

type put = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    Use ReturnValuesOnConditionCheckFailure to get the item attributes if the Put condition fails. For ReturnValuesOnConditionCheckFailure, the valid values are: NONE and ALL_OLD.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional update to succeed.

    *)
  5. table_name : string;
    (*

    Name of the table in which to write the item. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  6. item : (string * attribute_value) list;
    (*

    A map of attribute name to attribute values, representing the primary key of the item to be written by PutItem. All of the table's primary key attributes must be specified, and their data types must match those of the table's key schema. If any attributes are present in the item that are part of an index key schema for the table, their types must match the index key schema.

    *)
}

Represents a request to perform a PutItem operation.

type delete = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    Use ReturnValuesOnConditionCheckFailure to get the item attributes if the Delete condition fails. For ReturnValuesOnConditionCheckFailure, the valid values are: NONE and ALL_OLD.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional delete to succeed.

    *)
  5. table_name : string;
    (*

    Name of the table in which the item to be deleted resides. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  6. key : (string * attribute_value) list;
    (*

    The primary key of the item to be deleted. Each element consists of an attribute name and a value for that attribute.

    *)
}

Represents a request to perform a DeleteItem operation.

type transact_write_item = {
  1. update : update option;
    (*

    A request to perform an UpdateItem operation.

    *)
  2. delete : delete option;
    (*

    A request to perform a DeleteItem operation.

    *)
  3. put : put option;
    (*

    A request to perform a PutItem operation.

    *)
  4. condition_check : condition_check option;
    (*

    A request to perform a check item operation.

    *)
}

A list of requests that can perform update, put, delete, or check operations on multiple items in one or more tables atomically.

type transact_write_items_input = {
  1. client_request_token : string option;
    (*

    Providing a ClientRequestToken makes the call to TransactWriteItems idempotent, meaning that multiple identical calls have the same effect as one single call.

    Although multiple identical calls using the same client request token produce the same result on the server (no side effects), the responses to the calls might not be the same. If the ReturnConsumedCapacity parameter is set, then the initial TransactWriteItems call returns the amount of write capacity units consumed in making the changes. Subsequent TransactWriteItems calls with the same client token return the number of read capacity units consumed in reading the item.

    A client request token is valid for 10 minutes after the first request that uses it is completed. After 10 minutes, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 10 minutes, or the result might not be idempotent.

    If you submit a request with the same client token but a change in other parameters within the 10-minute idempotency window, DynamoDB returns an IdempotentParameterMismatch exception.

    *)
  2. return_item_collection_metrics : return_item_collection_metrics option;
    (*

    Determines whether item collection metrics are returned. If set to SIZE, the response includes statistics about any item collections that were modified during the operation. If set to NONE (the default), no statistics are returned.

    *)
  3. return_consumed_capacity : return_consumed_capacity option;
  4. transact_items : transact_write_item list;
    (*

    An ordered array of up to 100 TransactWriteItem objects, each of which contains a ConditionCheck, Put, Update, or Delete object. These can operate on items in different tables, but the tables must reside in the same Amazon Web Services account and Region, and no two of them can operate on the same item.

    *)
}
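
As a hedged illustration of assembling a TransactWriteItems request from the types above, the sketch below pairs a conditional Put with a ConditionCheck on a second table; the table names, attribute names, and token are illustrative only:

    (* Put a new order only if it does not already exist. *)
    let order_put : put = {
      return_values_on_condition_check_failure = None;
      expression_attribute_values = None;
      expression_attribute_names = None;
      condition_expression = Some "attribute_not_exists(OrderId)";
      table_name = "Orders";
      item = [ ("OrderId", S "o-123"); ("OrderStatus", S "PENDING") ];
    }

    (* Require at least one unit of stock, without modifying the item. *)
    let stock_check : condition_check = {
      return_values_on_condition_check_failure = None;
      expression_attribute_values = Some [ (":min", N "1") ];
      expression_attribute_names = None;
      condition_expression = "Quantity >= :min";
      table_name = "Inventory";
      key = [ ("Sku", S "sku-42") ];
    }

    let request : transact_write_items_input = {
      (* A stable token makes retries of this call idempotent for 10 minutes. *)
      client_request_token = Some "order-o-123";
      return_item_collection_metrics = None;
      return_consumed_capacity = None;
      transact_items = [
        { update = None; delete = None; put = Some order_put; condition_check = None };
        { update = None; delete = None; put = None; condition_check = Some stock_check };
      ];
    }
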
type idempotent_parameter_mismatch_exception = {
  1. message : string option;
}

DynamoDB rejected the request because you retried a request with a different payload but with an idempotent token that was already used.

type item_response = {
  1. item : (string * attribute_value) list option;
    (*

    Map of attribute data consisting of the data type and attribute value.

    *)
}

Details for the requested item.

type transact_get_items_output = {
  1. responses : item_response list option;
    (*

    An ordered array of up to 100 ItemResponse objects, each of which corresponds to the TransactGetItem object in the same position in the TransactItems array. Each ItemResponse object contains a Map of the name-value pairs that are the projected attributes of the requested item.

    If a requested item could not be retrieved, the corresponding ItemResponse object is Null, or if the requested item has no projected attributes, the corresponding ItemResponse object is an empty Map.

    *)
  2. consumed_capacity : consumed_capacity list option;
    (*

    If the ReturnConsumedCapacity value was TOTAL, this is an array of ConsumedCapacity objects, one for each table addressed by TransactGetItem objects in the TransactItems parameter. These ConsumedCapacity objects report the read-capacity units consumed by the TransactGetItems call in that table.

    *)
}
type get = {
  1. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in the ProjectionExpression parameter.

    *)
  2. projection_expression : string option;
    (*

    A string that identifies one or more attributes of the specified item to retrieve from the table. The attributes in the expression must be separated by commas. If no attribute names are specified, then all attributes of the specified item are returned. If any of the requested attributes are not found, they do not appear in the result.

    *)
  3. table_name : string;
    (*

    The name of the table from which to retrieve the specified item. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  4. key : (string * attribute_value) list;
    (*

    A map of attribute names to AttributeValue objects that specifies the primary key of the item to retrieve.

    *)
}

Specifies an item and related attribute values to retrieve in a TransactGetItem object.

type transact_get_item = {
  1. get : get;
    (*

    Contains the primary key that identifies the item to get, together with the name of the table that contains the item, and optionally the specific attributes of the item to retrieve.

    *)
}

Specifies an item to be retrieved as part of the transaction.

type transact_get_items_input = {
  1. return_consumed_capacity : return_consumed_capacity option;
    (*

    A value of TOTAL causes consumed capacity information to be returned, and a value of NONE prevents that information from being returned. No other value is valid.

    *)
  2. transact_items : transact_get_item list;
    (*

    An ordered array of up to 100 TransactGetItem objects, each of which contains a Get structure.

    *)
}
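
A minimal sketch of the corresponding read transaction, retrieving one item with a projection; the #s substitution is shown only to illustrate ExpressionAttributeNames, and the table and attribute names are placeholders:

    let read_request : transact_get_items_input = {
      return_consumed_capacity = None;
      transact_items = [
        { get = {
            expression_attribute_names = Some [ ("#s", "OrderStatus") ];
            projection_expression = Some "OrderId, #s";
            table_name = "Orders";
            key = [ ("OrderId", S "o-123") ];
          } };
      ];
    }
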
type time_to_live_status =
  1. | DISABLED
  2. | ENABLED
  3. | DISABLING
  4. | ENABLING
type time_to_live_description = {
  1. attribute_name : string option;
    (*

    The name of the TTL attribute for items in the table.

    *)
  2. time_to_live_status : time_to_live_status option;
    (*

    The TTL status for the table.

    *)
}

The description of the Time to Live (TTL) status on the specified table.

type tag = {
  1. value : string;
    (*

    The value of the tag. Tag values are case-sensitive and can be null.

    *)
  2. key : string;
    (*

    The key of the tag. Tag keys are case-sensitive. Each DynamoDB table can only have up to one tag with the same key. If you try to add an existing tag (same key), the existing tag value will be updated to the new value.

    *)
}

Describes a tag. A tag is a key-value pair. You can add up to 50 tags to a single DynamoDB table.

Amazon Web Services-assigned tag names and values are automatically assigned the aws: prefix, which the user cannot assign. Amazon Web Services-assigned tag names do not count towards the tag limit of 50. User-assigned tag names have the prefix user: in the Cost Allocation Report. You cannot backdate the application of a tag.

For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.

type tag_resource_input = {
  1. tags : tag list;
    (*

    The tags to be assigned to the Amazon DynamoDB resource.

    *)
  2. resource_arn : string;
    (*

    Identifies the Amazon DynamoDB resource to which tags should be added. This value is an Amazon Resource Name (ARN).

    *)
}
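
For example, a hedged tagging request (the ARN and tags are placeholders):

    let tagging : tag_resource_input = {
      tags = [
        { key = "env"; value = "prod" };
        { key = "team"; value = "analytics" };
      ];
      resource_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Orders";
    }
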
type table_in_use_exception = {
  1. message : string option;
}

A target table with the specified name is either being created or deleted.

type global_secondary_index = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. provisioned_throughput : provisioned_throughput option;
    (*

    Represents the provisioned throughput settings for the specified global secondary index.

    For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  3. projection : projection;
    (*

    Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  4. key_schema : key_schema_element list;
    (*

    The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  5. index_name : string;
    (*

    The name of the global secondary index. The name must be unique among all other indexes on this table.

    *)
}

Represents the properties of a global secondary index.

type table_creation_parameters = {
  1. global_secondary_indexes : global_secondary_index list option;
    (*

    The Global Secondary Indexes (GSI) of the table to be created as part of the import operation.

    *)
  2. sse_specification : sse_specification option;
  3. on_demand_throughput : on_demand_throughput option;
  4. provisioned_throughput : provisioned_throughput option;
  5. billing_mode : billing_mode option;
    (*

    The billing mode for provisioning the table created as part of the import operation.

    *)
  6. key_schema : key_schema_element list;
    (*

    The primary key and optional sort key of the table created as part of the import operation.

    *)
  7. attribute_definitions : attribute_definition list;
    (*

    The attributes of the table created as part of the import operation.

    *)
  8. table_name : string;
    (*

    The name of the table created as part of the import operation.

    *)
}

The parameters for the table created as part of the import operation.

type table_already_exists_exception = {
  1. message : string option;
}

A target table with the specified name already exists.

type local_secondary_index_info = {
  1. projection : projection option;
    (*

    Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  2. key_schema : key_schema_element list option;
    (*

    The complete key schema for a local secondary index, which consists of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  3. index_name : string option;
    (*

    Represents the name of the local secondary index.

    *)
}

Represents the properties of a local secondary index for the table when the backup was created.

type global_secondary_index_info = {
  1. on_demand_throughput : on_demand_throughput option;
  2. provisioned_throughput : provisioned_throughput option;
    (*

    Represents the provisioned throughput settings for the specified global secondary index.

    *)
  3. projection : projection option;
    (*

    Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  4. key_schema : key_schema_element list option;
    (*

    The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  5. index_name : string option;
    (*

    The name of the global secondary index.

    *)
}

Represents the properties of a global secondary index for the table when the backup was created.

type source_table_feature_details = {
  1. sse_description : sse_description option;
    (*

    The description of the server-side encryption status on the table when the backup was created.

    *)
  2. time_to_live_description : time_to_live_description option;
    (*

    Time to Live settings on the table when the backup was created.

    *)
  3. stream_description : stream_specification option;
    (*

    Stream settings on the table when the backup was created.

    *)
  4. global_secondary_indexes : global_secondary_index_info list option;
    (*

    Represents the GSI properties for the table when the backup was created. It includes the IndexName, KeySchema, Projection, and ProvisionedThroughput for the GSIs on the table at the time of backup.

    *)
  5. local_secondary_indexes : local_secondary_index_info list option;
    (*

    Represents the LSI properties for the table when the backup was created. It includes the IndexName, KeySchema, and Projection for the LSIs on the table at the time of backup.

    *)
}

Contains the details of the features enabled on the table when the backup was created. For example, LSIs, GSIs, streams, TTL.

type source_table_details = {
  1. billing_mode : billing_mode option;
    (*

    Controls how you are charged for read and write throughput and how you manage capacity. This setting can be changed later.

    • PROVISIONED - Sets the read/write capacity mode to PROVISIONED. We recommend using PROVISIONED for predictable workloads.
    • PAY_PER_REQUEST - Sets the read/write capacity mode to PAY_PER_REQUEST. We recommend using PAY_PER_REQUEST for unpredictable workloads.
    *)
  2. item_count : int option;
    (*

    Number of items in the table. Note that this is an approximate value.

    *)
  3. on_demand_throughput : on_demand_throughput option;
  4. provisioned_throughput : provisioned_throughput;
    (*

    Read IOPS and Write IOPS on the table when the backup was created.

    *)
  5. table_creation_date_time : float;
    (*

    Time when the source table was created.

    *)
  6. key_schema : key_schema_element list;
    (*

    Schema of the table.

    *)
  7. table_size_bytes : int option;
    (*

    Size of the table in bytes. Note that this is an approximate value.

    *)
  8. table_arn : string option;
    (*

    ARN of the table for which backup was created.

    *)
  9. table_id : string;
    (*

    Unique identifier for the table for which the backup was created.

    *)
  10. table_name : string;
    (*

    The name of the table for which the backup was created.

    *)
}

Contains the details of the table when the backup was created.

type select =
  1. | COUNT
  2. | SPECIFIC_ATTRIBUTES
  3. | ALL_PROJECTED_ATTRIBUTES
  4. | ALL_ATTRIBUTES
type scan_output = {
  1. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the Scan operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Capacity unit consumption for read operations in the Amazon DynamoDB Developer Guide.

    *)
  2. last_evaluated_key : (string * attribute_value) list option;
    (*

    The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.

    If LastEvaluatedKey is empty, then the "last page" of results has been processed and there is no more data to be retrieved.

    If LastEvaluatedKey is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedKey is empty.

    *)
  3. scanned_count : int option;
    (*

    The number of items evaluated, before any ScanFilter is applied. A high ScannedCount value with few, or no, Count results indicates an inefficient Scan operation. For more information, see Count and ScannedCount in the Amazon DynamoDB Developer Guide.

    If you did not use a filter in the request, then ScannedCount is the same as Count.

    *)
  4. count : int option;
    (*

    The number of items in the response.

    If you set ScanFilter in the request, then Count is the number of items returned after the filter was applied, and ScannedCount is the number of matching items before the filter was applied.

    If you did not use a filter in the request, then Count is the same as ScannedCount.

    *)
  5. items : (string * attribute_value) list list option;
    (*

    An array of item attributes that match the scan criteria. Each element in this array consists of an attribute name and the value for that attribute.

    *)
}

Represents the output of a Scan operation.

type condition = {
  1. comparison_operator : comparison_operator;
    (*

    A comparator for evaluating attributes. For example, equals, greater than, less than, etc.

    The following comparison operators are available:

    EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN

    The following are descriptions of each comparison operator.

    • EQ : Equal. EQ is supported for all data types, including lists and maps.

      AttributeValueList can contain only one AttributeValue element of type String, Number, Binary, String Set, Number Set, or Binary Set. If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.

    • NE : Not equal. NE is supported for all data types, including lists and maps.

      AttributeValueList can contain only one AttributeValue of type String, Number, Binary, String Set, Number Set, or Binary Set. If an item contains an AttributeValue of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.

    • LE : Less than or equal.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • LT : Less than.

      AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • GE : Greater than or equal.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • GT : Greater than.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    • NOT_NULL : The attribute exists. NOT_NULL is supported for all data types, including lists and maps.

      This operator tests for the existence of an attribute, not its data type. If the data type of attribute "a" is null, and you evaluate it using NOT_NULL, the result is a Boolean true. This result is because the attribute "a" exists; its data type is not relevant to the NOT_NULL comparison operator.

    • NULL : The attribute does not exist. NULL is supported for all data types, including lists and maps.

      This operator tests for the nonexistence of an attribute, not its data type. If the data type of attribute "a" is null, and you evaluate it using NULL, the result is a Boolean false. This is because the attribute "a" exists; its data type is not relevant to the NULL comparison operator.

    • CONTAINS : Checks for a subsequence, or value in a set.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If the target attribute of the comparison is of type String, then the operator checks for a substring match. If the target attribute of the comparison is of type Binary, then the operator looks for a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it finds an exact match with any member of the set.

      CONTAINS is supported for lists: When evaluating "a CONTAINS b", "a" can be a list; however, "b" cannot be a set, a map, or a list.

    • NOT_CONTAINS : Checks for absence of a subsequence, or absence of a value in a set.

      AttributeValueList can contain only one AttributeValue element of type String, Number, or Binary (not a set type). If the target attribute of the comparison is a String, then the operator checks for the absence of a substring match. If the target attribute of the comparison is Binary, then the operator checks for the absence of a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it does not find an exact match with any member of the set.

      NOT_CONTAINS is supported for lists: When evaluating "a NOT CONTAINS b", "a" can be a list; however, "b" cannot be a set, a map, or a list.

    • BEGINS_WITH : Checks for a prefix.

      AttributeValueList can contain only one AttributeValue of type String or Binary (not a Number or a set type). The target attribute of the comparison must be of type String or Binary (not a Number or a set type).

    • IN : Checks for matching elements in a list.

      AttributeValueList can contain one or more AttributeValue elements of type String, Number, or Binary. These attributes are compared against an existing attribute of an item. If any elements of the input are equal to the item attribute, the expression evaluates to true.

    • BETWEEN : Greater than or equal to the first value, and less than or equal to the second value.

      AttributeValueList must contain two AttributeValue elements of the same type, either String, Number, or Binary (not a set type). A target attribute matches if the target value is greater than, or equal to, the first element and less than, or equal to, the second element. If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.

    For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

    *)
  2. attribute_value_list : attribute_value list option;
    (*

    One or more values to evaluate against the supplied attribute. The number of values in the list depends on the ComparisonOperator being used.

    For type Number, value comparisons are numeric.

    String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a is greater than A, and a is greater than B. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters.

    For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.

    *)
}

Represents the selection criteria for a Query or Scan operation:

  • For a Query operation, Condition is used for specifying the KeyConditions to use when querying a table or an index. For KeyConditions, only the following comparison operators are supported:

    EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN

    Condition is also used in a QueryFilter, which evaluates the query results and returns only the desired values.

  • For a Scan operation, Condition is used in a ScanFilter, which evaluates the scan results and returns only the desired values.
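
For instance, a legacy BETWEEN condition over a numeric attribute could be written as in the sketch below; the BETWEEN constructor name is assumed to mirror the operator list above, and new code should prefer FilterExpression or KeyConditionExpression instead:

    let price_between : condition = {
      comparison_operator = BETWEEN;  (* assumed variant name *)
      attribute_value_list = Some [ N "10"; N "50" ];
    }
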
type scan_input = {
  1. consistent_read : bool option;
    (*

    A Boolean value that determines the read consistency model during the scan:

    • If ConsistentRead is false, then the data returned from Scan might not contain the results from other recently completed write operations (PutItem, UpdateItem, or DeleteItem).
    • If ConsistentRead is true, then all of the write operations that completed before the Scan began are guaranteed to be contained in the Scan response.

    The default setting for ConsistentRead is false.

    The ConsistentRead parameter is not supported on global secondary indexes. If you scan a global secondary index with ConsistentRead set to true, you will receive a ValidationException.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

    Available | Backordered | Discontinued

    You would first need to specify ExpressionAttributeValues as follows:

    { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }

    You could then use these values in an expression, such as this:

    ProductStatus IN (:avail, :back, :disc)

    For more information on expression attribute values, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  4. filter_expression : string option;
    (*

    A string that contains conditions that DynamoDB applies after the Scan operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

    A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

    For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.

    *)
  5. projection_expression : string option;
    (*

    A string that identifies one or more attributes to retrieve from the specified table or index. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

    If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

    For more information, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  6. segment : int option;
    (*

    For a parallel Scan request, Segment identifies an individual segment to be scanned by an application worker.

    Segment IDs are zero-based, so the first segment is always 0. For example, if you want to use four application threads to scan a table or an index, then the first thread specifies a Segment value of 0, the second thread specifies 1, and so on.

    The value of LastEvaluatedKey returned from a parallel Scan request must be used as ExclusiveStartKey with the same segment ID in a subsequent Scan operation.

    The value for Segment must be greater than or equal to 0, and less than the value provided for TotalSegments.

    If you provide Segment, you must also provide TotalSegments.

    *)
  7. total_segments : int option;
    (*

    For a parallel Scan request, TotalSegments represents the total number of segments into which the Scan operation will be divided. The value of TotalSegments corresponds to the number of application workers that will perform the parallel scan. For example, if you want to use four application threads to scan a table or an index, specify a TotalSegments value of 4.

    The value for TotalSegments must be greater than or equal to 1, and less than or equal to 1000000. If you specify a TotalSegments value of 1, the Scan operation will be sequential rather than parallel.

    If you specify TotalSegments, you must also specify Segment.

    *)
  8. return_consumed_capacity : return_consumed_capacity option;
  9. exclusive_start_key : (string * attribute_value) list option;
    (*

    The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation.

    The data type for ExclusiveStartKey must be String, Number or Binary. No set data types are allowed.

    In a parallel scan, a Scan request that includes ExclusiveStartKey must specify the same segment whose previous Scan returned the corresponding value of LastEvaluatedKey.

    *)
  10. conditional_operator : conditional_operator option;
    (*

    This is a legacy parameter. Use FilterExpression instead. For more information, see ConditionalOperator in the Amazon DynamoDB Developer Guide.

    *)
  11. scan_filter : (string * condition) list option;
    (*

    This is a legacy parameter. Use FilterExpression instead. For more information, see ScanFilter in the Amazon DynamoDB Developer Guide.

    *)
  12. select : select option;
    (*

    The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.

    • ALL_ATTRIBUTES - Returns all of the item attributes from the specified table or index. If you query a local secondary index, then for each matching item in the index, DynamoDB fetches the entire item from the parent table. If the index is configured to project all item attributes, then all of the data can be obtained from the local secondary index, and no fetching is required.
    • ALL_PROJECTED_ATTRIBUTES - Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying ALL_ATTRIBUTES.
    • COUNT - Returns the number of matching items, rather than the matching items themselves. Note that this uses the same quantity of read capacity units as getting the items, and is subject to the same item size calculations.
    • SPECIFIC_ATTRIBUTES - Returns only the attributes listed in ProjectionExpression. This return value is equivalent to specifying ProjectionExpression without specifying any value for Select.

      If you query or scan a local secondary index and request only attributes that are projected into that index, the operation reads only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.

      If you query or scan a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.

    If neither Select nor ProjectionExpression are specified, DynamoDB defaults to ALL_ATTRIBUTES when accessing a table, and ALL_PROJECTED_ATTRIBUTES when accessing an index. You cannot use both Select and ProjectionExpression together in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying ProjectionExpression without any value for Select.)

    If you use the ProjectionExpression parameter, then the value for Select can only be SPECIFIC_ATTRIBUTES. Any other value for Select will return an error.

    *)
  13. limit : int option;
    (*

    The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a key in LastEvaluatedKey to apply in a subsequent operation, so that you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in LastEvaluatedKey to apply in a subsequent operation to continue the operation. For more information, see Working with Queries in the Amazon DynamoDB Developer Guide.

    *)
  14. attributes_to_get : string list option;
    (*

    This is a legacy parameter. Use ProjectionExpression instead. For more information, see AttributesToGet in the Amazon DynamoDB Developer Guide.

    *)
  15. index_name : string option;
    (*

    The name of a secondary index to scan. This index can be any local secondary index or global secondary index. Note that if you use the IndexName parameter, you must also provide TableName.

    *)
  16. table_name : string;
    (*

    The name of the table containing the requested items or if you provide IndexName, the name of the table to which that index belongs.

    You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a Scan operation.
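Because a single Scan returns at most 1 MB of data, callers typically loop on LastEvaluatedKey as described above. A hedged pagination sketch follows, where run_scan stands in for whatever function this client exposes to execute a Scan (the real entry point may differ):

    (* Collect every item by resubmitting the scan with the returned key. *)
    let rec scan_all run_scan (input : scan_input) acc =
      let (output : scan_output) = run_scan input in
      let acc = acc @ Option.value output.items ~default:[] in
      match output.last_evaluated_key with
      | None | Some [] -> acc  (* empty LastEvaluatedKey: last page processed *)
      | Some key ->
        scan_all run_scan { input with exclusive_start_key = Some key } acc

For a parallel scan, each worker would run the same loop with its own segment value and a shared total_segments.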

type s3_sse_algorithm =
  1. | KMS
  2. | AES256
type s3_bucket_source = {
  1. s3_key_prefix : string option;
    (*

    The key prefix shared by all S3 Objects that are being imported.

    *)
  2. s3_bucket : string;
    (*

    The S3 bucket that is being imported from.

    *)
  3. s3_bucket_owner : string option;
    (*

    The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester, this is optional.

    *)
}

The S3 bucket that is being imported from.
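
A minimal example value (the bucket name and prefix are placeholders):

    let import_source : s3_bucket_source = {
      s3_key_prefix = Some "exports/orders/";
      s3_bucket = "my-import-bucket";
      s3_bucket_owner = None;
    }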

type restore_table_to_point_in_time_output = {
  1. table_description : table_description option;
    (*

    Represents the properties of a table.

    *)
}
type local_secondary_index = {
  1. projection : projection;
    (*

    Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

    *)
  2. key_schema : key_schema_element list;
    (*

    The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types:

    • HASH - partition key
    • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    *)
  3. index_name : string;
    (*

    The name of the local secondary index. The name must be unique among all other indexes on this table.

    *)
}

Represents the properties of a local secondary index.

type restore_table_to_point_in_time_input = {
  1. sse_specification_override : sse_specification option;
    (*

    The new server-side encryption settings for the restored table.

    *)
  2. on_demand_throughput_override : on_demand_throughput option;
  3. provisioned_throughput_override : provisioned_throughput option;
    (*

    Provisioned throughput settings for the restored table.

    *)
  4. local_secondary_index_override : local_secondary_index list option;
    (*

    List of local secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.

    *)
  5. global_secondary_index_override : global_secondary_index list option;
    (*

    List of global secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.

    *)
  6. billing_mode_override : billing_mode option;
    (*

    The billing mode of the restored table.

    *)
  7. restore_date_time : float option;
    (*

    Time in the past to restore the table to.

    *)
  8. use_latest_restorable_time : bool option;
    (*

    Restore the table to the latest possible time. LatestRestorableDateTime is typically 5 minutes before the current time.

    *)
  9. target_table_name : string;
    (*

    The name of the new table to which the source table must be restored.

    *)
  10. source_table_name : string option;
    (*

    Name of the source table that is being restored.

    *)
  11. source_table_arn : string option;
    (*

    The DynamoDB table that will be restored. This value is an Amazon Resource Name (ARN).

    *)
}
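
A hedged example restoring a table to its latest restorable time; the table names are placeholders, and restore_date_time would be set instead to restore to a specific point in time:

    let restore : restore_table_to_point_in_time_input = {
      sse_specification_override = None;
      on_demand_throughput_override = None;
      provisioned_throughput_override = None;
      local_secondary_index_override = None;
      global_secondary_index_override = None;
      billing_mode_override = None;
      restore_date_time = None;
      use_latest_restorable_time = Some true;
      target_table_name = "Orders-restored";
      source_table_name = Some "Orders";
      source_table_arn = None;
    }
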
type point_in_time_recovery_unavailable_exception = {
  1. message : string option;
}

Point in time recovery has not yet been enabled for this source table.

type invalid_restore_time_exception = {
  1. message : string option;
}

An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime and LatestRestorableDateTime.

type restore_table_from_backup_output = {
  1. table_description : table_description option;
    (*

    The description of the table created from an existing backup.

    *)
}
type restore_table_from_backup_input = {
  1. sse_specification_override : sse_specification option;
    (*

    The new server-side encryption settings for the restored table.

    *)
  2. on_demand_throughput_override : on_demand_throughput option;
  3. provisioned_throughput_override : provisioned_throughput option;
    (*

    Provisioned throughput settings for the restored table.

    *)
  4. local_secondary_index_override : local_secondary_index list option;
    (*

    List of local secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.

    *)
  5. global_secondary_index_override : global_secondary_index list option;
    (*

    List of global secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.

    *)
  6. billing_mode_override : billing_mode option;
    (*

    The billing mode of the restored table.

    *)
  7. backup_arn : string;
    (*

    The Amazon Resource Name (ARN) associated with the backup.

    *)
  8. target_table_name : string;
    (*

    The name of the new table to which the backup must be restored.

    *)
}
type backup_not_found_exception = {
  1. message : string option;
}

Backup not found for the given BackupARN.

type backup_in_use_exception = {
  1. message : string option;
}

There is another ongoing conflicting backup control plane operation on the table. The backup is either being created, deleted, or restored to a table.

type replica = {
  1. region_name : string option;
    (*

    The Region where the replica needs to be created.

    *)
}

Represents the properties of a replica.

type query_output = {
  1. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the Query operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Capacity unit consumption for read operations in the Amazon DynamoDB Developer Guide.

    *)
  2. last_evaluated_key : (string * attribute_value) list option;
    (*

    The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.

    If LastEvaluatedKey is empty, then the "last page" of results has been processed and there is no more data to be retrieved.

    If LastEvaluatedKey is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedKey is empty.

    *)
  3. scanned_count : int option;
    (*

    The number of items evaluated, before any QueryFilter is applied. A high ScannedCount value with few, or no, Count results indicates an inefficient Query operation. For more information, see Count and ScannedCount in the Amazon DynamoDB Developer Guide.

    If you did not use a filter in the request, then ScannedCount is the same as Count.

    *)
  4. count : int option;
    (*

    The number of items in the response.

    If you used a QueryFilter in the request, then Count is the number of items returned after the filter was applied, and ScannedCount is the number of matching items before the filter was applied.

    If you did not use a filter in the request, then Count and ScannedCount are the same.

    *)
  5. items : (string * attribute_value) list list option;
    (*

    An array of item attributes that match the query criteria. Each element in this array consists of an attribute name and the value for that attribute.

    *)
}

Represents the output of a Query operation.
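
Since a large gap between ScannedCount and Count signals an inefficient filter, a small helper like the sketch below can make that visible:

    let log_efficiency (output : query_output) =
      Printf.printf "returned %d of %d evaluated items\n"
        (Option.value output.count ~default:0)
        (Option.value output.scanned_count ~default:0)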

type query_input = {
  1. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

    Available | Backordered | Discontinued

    You would first need to specify ExpressionAttributeValues as follows:

    { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }

    You could then use these values in an expression, such as this:

    ProductStatus IN (:avail, :back, :disc)

    For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.

    *)
  2. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  3. key_condition_expression : string option;
    (*

    The condition that specifies the key values for items to be retrieved by the Query action.

    The condition must perform an equality test on a single partition key value.

    The condition can optionally perform one of several comparison tests on a single sort key value. This allows Query to retrieve one item with a given partition key value and sort key value, or several items that have the same partition key value but different sort key values.

    The partition key equality test is required, and must be specified in the following format:

    partitionKeyName = :partitionkeyval

    If you also want to provide a condition for the sort key, it must be combined using AND with the condition for the sort key. Following is an example, using the = comparison operator for the sort key:

    partitionKeyName = :partitionkeyval AND sortKeyName = :sortkeyval

    Valid comparisons for the sort key condition are as follows:

    • sortKeyName = :sortkeyval - true if the sort key value is equal to :sortkeyval.
    • sortKeyName < :sortkeyval - true if the sort key value is less than :sortkeyval.
    • sortKeyName <= :sortkeyval - true if the sort key value is less than or equal to :sortkeyval.
    • sortKeyName > :sortkeyval - true if the sort key value is greater than :sortkeyval.
    • sortKeyName >= :sortkeyval - true if the sort key value is greater than or equal to :sortkeyval.
    • sortKeyName BETWEEN :sortkeyval1 AND :sortkeyval2 - true if the sort key value is greater than or equal to :sortkeyval1, and less than or equal to :sortkeyval2.
    • begins_with ( sortKeyName, :sortkeyval ) - true if the sort key value begins with a particular operand. (You cannot use this function with a sort key that is of type Number.) Note that the function name begins_with is case-sensitive.

    Use the ExpressionAttributeValues parameter to replace tokens such as :partitionval and :sortval with actual values at runtime.

    You can optionally use the ExpressionAttributeNames parameter to replace the names of the partition key and sort key with placeholder tokens. This option might be necessary if an attribute name conflicts with a DynamoDB reserved word. For example, the following KeyConditionExpression parameter causes an error because Size is a reserved word:

    • Size = :myval

    To work around this, define a placeholder (such as #S) to represent the attribute name Size. KeyConditionExpression then is as follows:

    • #S = :myval

    For a list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide.

    For more information on ExpressionAttributeNames and ExpressionAttributeValues, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

    *)
  4. filter_expression : string option;
    (*

    A string that contains conditions that DynamoDB applies after the Query operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

    A FilterExpression does not allow key attributes. You cannot define a filter expression based on a partition key or a sort key.

    A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

    For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.

    *)
  5. projection_expression : string option;
    (*

    A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

    If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

    For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  6. return_consumed_capacity : return_consumed_capacity option;
  7. exclusive_start_key : (string * attribute_value) list option;
    (*

    The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation.

    The data type for ExclusiveStartKey must be String, Number, or Binary. No set data types are allowed.

    *)
  8. scan_index_forward : bool option;
    (*

    Specifies the order for index traversal: If true (default), the traversal is performed in ascending order; if false, the traversal is performed in descending order.

    Items with the same partition key value are stored in sorted order by sort key. If the sort key data type is Number, the results are stored in numeric order. For type String, the results are stored in order of UTF-8 bytes. For type Binary, DynamoDB treats each byte of the binary data as unsigned.

    If ScanIndexForward is true, DynamoDB returns the results in the order in which they are stored (by sort key value). This is the default behavior. If ScanIndexForward is false, DynamoDB reads the results in reverse order by sort key value, and then returns the results to the client.

    *)
  9. conditional_operator : conditional_operator option;
    (*

    This is a legacy parameter. Use FilterExpression instead. For more information, see ConditionalOperator in the Amazon DynamoDB Developer Guide.

    *)
  10. query_filter : (string * condition) list option;
    (*

    This is a legacy parameter. Use FilterExpression instead. For more information, see QueryFilter in the Amazon DynamoDB Developer Guide.

    *)
  11. key_conditions : (string * condition) list option;
    (*

    This is a legacy parameter. Use KeyConditionExpression instead. For more information, see KeyConditions in the Amazon DynamoDB Developer Guide.

    *)
  12. consistent_read : bool option;
    (*

    Determines the read consistency model: If set to true, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.

    Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true, you will receive a ValidationException.

    *)
  13. limit : int option;
    (*

    The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a key in LastEvaluatedKey to apply in a subsequent operation, so that you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in LastEvaluatedKey to apply in a subsequent operation to continue the operation. For more information, see Query and Scan in the Amazon DynamoDB Developer Guide.

    *)
  14. attributes_to_get : string list option;
    (*

    This is a legacy parameter. Use ProjectionExpression instead. For more information, see AttributesToGet in the Amazon DynamoDB Developer Guide.

    *)
  15. select : select option;
    (*

    The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.

    • ALL_ATTRIBUTES - Returns all of the item attributes from the specified table or index. If you query a local secondary index, then for each matching item in the index, DynamoDB fetches the entire item from the parent table. If the index is configured to project all item attributes, then all of the data can be obtained from the local secondary index, and no fetching is required.
    • ALL_PROJECTED_ATTRIBUTES - Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying ALL_ATTRIBUTES.
    • COUNT - Returns the number of matching items, rather than the matching items themselves. Note that this uses the same quantity of read capacity units as getting the items, and is subject to the same item size calculations.
    • SPECIFIC_ATTRIBUTES - Returns only the attributes listed in ProjectionExpression. This return value is equivalent to specifying ProjectionExpression without specifying any value for Select.

      If you query or scan a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.

      If you query or scan a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.

    If neither Select nor ProjectionExpression is specified, DynamoDB defaults to ALL_ATTRIBUTES when accessing a table, and ALL_PROJECTED_ATTRIBUTES when accessing an index. You cannot use both Select and ProjectionExpression together in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying ProjectionExpression without any value for Select.)

    If you use the ProjectionExpression parameter, then the value for Select can only be SPECIFIC_ATTRIBUTES. Any other value for Select will return an error.

    *)
  16. index_name : string option;
    (*

    The name of an index to query. This index can be any local secondary index or global secondary index on the table. Note that if you use the IndexName parameter, you must also provide TableName.

    *)
  17. table_name : string;
    (*

    The name of the table containing the requested items. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a Query operation.
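
As a hedged sketch of the placeholder mechanics described above, the following OCaml fragment builds the FilterExpression pieces for a Query; the ProductStatus attribute and its status value are hypothetical.

    (* #st and :avail are placeholders, resolved through the two maps below. *)
    let filter_expression = Some "#st = :avail"
    let expression_attribute_names = Some [ ("#st", "ProductStatus") ]
    let expression_attribute_values : (string * attribute_value) list option =
      Some [ (":avail", S "Available") ]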

type put_resource_policy_output = {
  1. revision_id : string option;
    (*

    A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic.

    *)
}
type put_resource_policy_input = {
  1. confirm_remove_self_resource_access : bool option;
    (*

    Set this parameter to true to confirm that you want to remove your permissions to change the policy of this resource in the future.

    *)
  2. expected_revision_id : string option;
    (*

    A string value that you can use to conditionally update your policy. You can provide the revision ID of your existing policy to make mutating requests against that policy.

    When you provide an expected revision ID, if the revision ID of the existing policy on the resource doesn't match or if there's no policy attached to the resource, your request will be rejected with a PolicyNotFoundException.

    To conditionally attach a policy when no policy exists for the resource, specify NO_POLICY for the revision ID.

    *)
  3. policy : string;
    (*

    An Amazon Web Services resource-based policy document in JSON format.

    • The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit.
    • Within a resource-based policy, if the action for a DynamoDB service-linked role (SLR) to replicate data for a global table is denied, adding or deleting a replica will fail with an error.

    For a full list of all considerations that apply while attaching a resource-based policy, see Resource-based policy considerations.

    *)
  4. resource_arn : string;
    (*

    The Amazon Resource Name (ARN) of the DynamoDB resource to which the policy will be attached. The resources you can specify include tables and streams.

    You can control index permissions using the base table's policy. To specify the same permission level for your table and its indexes, you can provide both the table and index Amazon Resource Names (ARNs) in the Resource field of a given Statement in your policy document. Alternatively, to specify different permissions for your table, indexes, or both, you can define multiple Statement fields in your policy document.

    *)
}
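
A minimal sketch, assuming the record above: a conditional attach that succeeds only when no policy exists yet, via the NO_POLICY revision ID. The ARN and policy document are placeholders.

    let attach_if_absent : put_resource_policy_input =
      { confirm_remove_self_resource_access = None;
        expected_revision_id = Some "NO_POLICY" (* succeed only if no policy is attached *);
        policy = {|{"Version":"2012-10-17","Statement":[]}|} (* placeholder document *);
        resource_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Music" }
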
type policy_not_found_exception = {
  1. message : string option;
}

The operation tried to access a nonexistent resource-based policy.

If you specified an ExpectedRevisionId, it's possible that a policy is present for the resource but its revision ID didn't match the expected value.

type put_item_output = {
  1. item_collection_metrics : item_collection_metrics option;
    (*

    Information about item collections, if any, that were affected by the PutItem operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.

    Each ItemCollectionMetrics element consists of:

    • ItemCollectionKey - The partition key value of the item collection. This is the same as the partition key value of the item itself.
    • SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.

      The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.

    *)
  2. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the PutItem operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Capacity unit consumption for write operations in the Amazon DynamoDB Developer Guide.

    *)
  3. attributes : (string * attribute_value) list option;
    (*

    The attribute values as they appeared before the PutItem operation, but only if ReturnValues is specified as ALL_OLD in the request. Each element consists of an attribute name and an attribute value.

    *)
}

Represents the output of a PutItem operation.

type put_item_input = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for a PutItem operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

    Available | Backordered | Discontinued

    You would first need to specify ExpressionAttributeValues as follows:

    { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }

    You could then use these values in an expression, such as this:

    ProductStatus IN (:avail, :back, :disc)

    For more information on expression attribute values, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional PutItem operation to succeed.

    An expression can contain any of the following:

    • Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size

      These function names are case-sensitive.

    • Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
    • Logical operators: AND | OR | NOT

    For more information on condition expressions, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  5. conditional_operator : conditional_operator option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see ConditionalOperator in the Amazon DynamoDB Developer Guide.

    *)
  6. return_item_collection_metrics : return_item_collection_metrics option;
    (*

    Determines whether item collection metrics are returned. If set to SIZE, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE (the default), no statistics are returned.

    *)
  7. return_consumed_capacity : return_consumed_capacity option;
  8. return_values : return_value option;
    (*

    Use ReturnValues if you want to get the item attributes as they appeared before they were updated with the PutItem request. For PutItem, the valid values are:

    • NONE - If ReturnValues is not specified, or if its value is NONE, then nothing is returned. (This setting is the default for ReturnValues.)
    • ALL_OLD - If PutItem overwrote an attribute name-value pair, then the content of the old item is returned.

    The values returned are strongly consistent.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    The ReturnValues parameter is used by several DynamoDB operations; however, PutItem does not recognize any values other than NONE or ALL_OLD.

    *)
  9. expected : (string * expected_attribute_value) list option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

    *)
  10. item : (string * attribute_value) list;
    (*

    A map of attribute name/value pairs, one for each attribute. Only the primary key attributes are required; you can optionally provide other attribute name-value pairs for the item.

    You must provide all of the attributes for the primary key. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.

    If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition.

    Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index.

    For more information about primary keys, see Primary Key in the Amazon DynamoDB Developer Guide.

    Each element in the Item map is an AttributeValue object.

    *)
  11. table_name : string;
    (*

    The name of the table to contain the item. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a PutItem operation.
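
A minimal sketch, assuming the record above: a conditional put that succeeds only when no item with this partition key exists. The table and attribute names are hypothetical.

    let put_if_absent : put_item_input =
      { return_values_on_condition_check_failure = None;
        expression_attribute_values = None;
        expression_attribute_names = Some [ ("#pk", "Artist") ];
        condition_expression = Some "attribute_not_exists(#pk)";
        conditional_operator = None;
        return_item_collection_metrics = None;
        return_consumed_capacity = None;
        return_values = None;
        expected = None;
        item = [ ("Artist", S "No One You Know");
                 ("SongTitle", S "Call Me Today");
                 ("Year", N "2024") ];
        table_name = "Music" }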

type batch_statement_error_code_enum =
  1. | DuplicateItem
  2. | AccessDenied
  3. | ResourceNotFound
  4. | InternalServerError
  5. | ThrottlingError
  6. | TransactionConflict
  7. | ProvisionedThroughputExceeded
  8. | ValidationError
  9. | RequestLimitExceeded
  10. | ItemCollectionSizeLimitExceeded
  11. | ConditionalCheckFailed
type batch_statement_error = {
  1. item : (string * attribute_value) list option;
    (*

    The item which caused the condition check to fail. This will be set if ReturnValuesOnConditionCheckFailure is specified as ALL_OLD.

    *)
  2. message : string option;
    (*

    The error message associated with the PartiQL batch response.

    *)
  3. code : batch_statement_error_code_enum option;
    (*

    The error code associated with the failed PartiQL batch statement.

    *)
}

An error associated with a statement in a PartiQL batch that was run.

type batch_statement_response = {
  1. item : (string * attribute_value) list option;
    (*

    A DynamoDB item associated with a BatchStatementResponse.

    *)
  2. table_name : string option;
    (*

    The table name associated with a failed PartiQL batch statement.

    *)
  3. error : batch_statement_error option;
    (*

    The error associated with a failed PartiQL batch statement.

    *)
}

A PartiQL batch statement response.

type batch_statement_request = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for a PartiQL batch request operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. consistent_read : bool option;
    (*

    The read consistency of the PartiQL batch request.

    *)
  3. parameters : attribute_value list option;
    (*

    The parameters associated with a PartiQL statement in the batch request.

    *)
  4. statement : string;
    (*

    A valid PartiQL statement.

    *)
}

A PartiQL batch statement request.
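
A minimal sketch, assuming the record above: one parameterized PartiQL statement for a batch. The table name and values are hypothetical.

    let stmt : batch_statement_request =
      { return_values_on_condition_check_failure = None;
        consistent_read = Some true;
        parameters = Some [ S "Acme Band"; S "Happy Day" ];
        statement = {|SELECT * FROM "Music" WHERE Artist = ? AND SongTitle = ?|} }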

type parameterized_statement = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for a PartiQL ParameterizedStatement operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. parameters : attribute_value list option;
    (*

    The parameter values.

    *)
  3. statement : string;
    (*

    A PartiQL statement that uses parameters.

    *)
}

Represents a PartiQL statement that uses parameters.

type list_tags_of_resource_output = {
  1. next_token : string option;
    (*

    If this value is returned, there are additional results to be displayed. To retrieve them, call ListTagsOfResource again, with NextToken set to this value.

    *)
  2. tags : tag list option;
    (*

    The tags currently associated with the Amazon DynamoDB resource.

    *)
}
type list_tags_of_resource_input = {
  1. next_token : string option;
    (*

    An optional string that, if supplied, must be copied from the output of a previous call to ListTagsOfResource. When provided in this manner, this API fetches the next page of results.

    *)
  2. resource_arn : string;
    (*

    The Amazon DynamoDB resource with tags to be listed. This value is an Amazon Resource Name (ARN).

    *)
}
type list_tables_output = {
  1. last_evaluated_table_name : string option;
    (*

    The name of the last table in the current page of results. Use this value as the ExclusiveStartTableName in a new request to obtain the next page of results, until all the table names are returned.

    If you do not receive a LastEvaluatedTableName value in the response, this means that there are no more table names to be retrieved.

    *)
  2. table_names : string list option;
    (*

    The names of the tables associated with the current account at the current endpoint. The maximum size of this array is 100.

    If LastEvaluatedTableName also appears in the output, you can use this value as the ExclusiveStartTableName parameter in a subsequent ListTables request and obtain the next page of results.

    *)
}

Represents the output of a ListTables operation.

type list_tables_input = {
  1. limit : int option;
    (*

    The maximum number of table names to return. If this parameter is not specified, the limit is 100.

    *)
  2. exclusive_start_table_name : string option;
    (*

    The first table name that this operation will evaluate. Use the value that was returned for LastEvaluatedTableName in a previous operation, so that you can obtain the next page of results.

    *)
}

Represents the input of a ListTables operation.
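
The LastEvaluatedTableName / ExclusiveStartTableName contract above lends itself to a small driver loop. A hedged sketch follows; the call argument stands in for the client's ListTables function, whose real name is not shown in this section.

    (* Collect all table names by feeding LastEvaluatedTableName back as
       ExclusiveStartTableName until the server stops returning one. *)
    let rec all_tables ~call ?start acc =
      let input : list_tables_input =
        { limit = Some 100; exclusive_start_table_name = start } in
      let (out : list_tables_output) = call input in
      let acc = acc @ Option.value ~default:[] out.table_names in
      match out.last_evaluated_table_name with
      | None -> acc
      | Some name -> all_tables ~call ~start:name acc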

type import_status =
  1. | FAILED
  2. | CANCELLED
  3. | CANCELLING
  4. | COMPLETED
  5. | IN_PROGRESS
type input_format =
  1. | CSV
  2. | ION
  3. | DYNAMODB_JSON
type import_summary = {
  1. end_time : float option;
    (*

    The time at which this import task ended.

    *)
  2. start_time : float option;
    (*

    The time at which this import task began.

    *)
  3. input_format : input_format option;
    (*

    The format of the source data. Valid values are CSV, DYNAMODB_JSON or ION.

    *)
  4. cloud_watch_log_group_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the CloudWatch log group associated with this import task.

    *)
  5. s3_bucket_source : s3_bucket_source option;
    (*

    The path and S3 bucket of the source file that is being imported. This includes the S3Bucket (required), S3KeyPrefix (optional) and S3BucketOwner (optional if the bucket is owned by the requester).

    *)
  6. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the table being imported into.

    *)
  7. import_status : import_status option;
    (*

    The status of the import operation.

    *)
  8. import_arn : string option;
    (*

    The Amazon Resource Name (ARN) corresponding to the import request.

    *)
}

Summary information about the source file for the import.

type list_imports_output = {
  1. next_token : string option;
    (*

    If this value is returned, there are additional results to be displayed. To retrieve them, call ListImports again, with NextToken set to this value.

    *)
  2. import_summary_list : import_summary list option;
    (*

    A list of ImportSummary objects.

    *)
}
type list_imports_input = {
  1. next_token : string option;
    (*

    An optional string that, if supplied, must be copied from the output of a previous call to ListImports. When provided in this manner, the API fetches the next page of results.

    *)
  2. page_size : int option;
    (*

    The number of ImportSummary objects returned in a single page.

    *)
  3. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) associated with the table that was imported to.

    *)
}
type global_table = {
  1. replication_group : replica list option;
    (*

    The Regions where the global table has replicas.

    *)
  2. global_table_name : string option;
    (*

    The global table name.

    *)
}

Represents the properties of a global table.

type list_global_tables_output = {
  1. last_evaluated_global_table_name : string option;
    (*

    Last evaluated global table name.

    *)
  2. global_tables : global_table list option;
    (*

    List of global tables.

    *)
}
type list_global_tables_input = {
  1. region_name : string option;
    (*

    Lists the global tables in a specific Region.

    *)
  2. limit : int option;
    (*

    The maximum number of table names to return. If this parameter is not specified, DynamoDB defaults to 100.

    If the number of global tables DynamoDB finds reaches this limit, it stops the operation and returns the table names collected up to that point, with a table name in LastEvaluatedGlobalTableName to apply as the ExclusiveStartGlobalTableName parameter in a subsequent operation.

    *)
  3. exclusive_start_global_table_name : string option;
    (*

    The first global table name that this operation will evaluate.

    *)
}
type export_status =
  1. | FAILED
  2. | COMPLETED
  3. | IN_PROGRESS
type export_type =
  1. | INCREMENTAL_EXPORT
  2. | FULL_EXPORT
type export_summary = {
  1. export_type : export_type option;
    (*

    The type of export that was performed. Valid values are FULL_EXPORT or INCREMENTAL_EXPORT.

    *)
  2. export_status : export_status option;
    (*

    Export can be in one of the following states: IN_PROGRESS, COMPLETED, or FAILED.

    *)
  3. export_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the export.

    *)
}

Summary information about an export task.

type list_exports_output = {
  1. next_token : string option;
    (*

    If this value is returned, there are additional results to be displayed. To retrieve them, call ListExports again, with NextToken set to this value.

    *)
  2. export_summaries : export_summary list option;
    (*

    A list of ExportSummary objects.

    *)
}
type list_exports_input = {
  1. next_token : string option;
    (*

    An optional string that, if supplied, must be copied from the output of a previous call to ListExports. When provided in this manner, the API fetches the next page of results.

    *)
  2. max_results : int option;
    (*

    Maximum number of results to return per page.

    *)
  3. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) associated with the exported table.

    *)
}
type contributor_insights_summary = {
  1. contributor_insights_status : contributor_insights_status option;
    (*

    Describes the current status for contributor insights for the given table and index, if applicable.

    *)
  2. index_name : string option;
    (*

    Name of the index associated with the summary, if any.

    *)
  3. table_name : string option;
    (*

    Name of the table associated with the summary.

    *)
}

Represents a Contributor Insights summary entry.

type list_contributor_insights_output = {
  1. next_token : string option;
    (*

    A token to go to the next page if there is one.

    *)
  2. contributor_insights_summaries : contributor_insights_summary list option;
    (*

    A list of ContributorInsightsSummary.

    *)
}
type list_contributor_insights_input = {
  1. max_results : int option;
    (*

    Maximum number of results to return per page.

    *)
  2. next_token : string option;
    (*

    A token for the desired page, if there is one.

    *)
  3. table_name : string option;
    (*

    The name of the table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type backup_status =
  1. | AVAILABLE
  2. | DELETED
  3. | CREATING
type backup_type =
  1. | AWS_BACKUP
  2. | SYSTEM
  3. | USER
type backup_summary = {
  1. backup_size_bytes : int option;
    (*

    Size of the backup in bytes.

    *)
  2. backup_type : backup_type option;
    (*

    BackupType:

    • USER - You create and manage these using the on-demand backup feature.
    • SYSTEM - If you delete a table with point-in-time recovery enabled, a SYSTEM backup is automatically created and is retained for 35 days (at no additional cost). System backups allow you to restore the deleted table to the state it was in just before the point of deletion.
    • AWS_BACKUP - On-demand backup created by you from the Backup service.
    *)
  3. backup_status : backup_status option;
    (*

    Backup can be in one of the following states: CREATING, AVAILABLE, or DELETED.

    *)
  4. backup_expiry_date_time : float option;
    (*

    Time at which the automatic on-demand backup created by DynamoDB will expire. This SYSTEM on-demand backup expires automatically 35 days after its creation.

    *)
  5. backup_creation_date_time : float option;
    (*

    Time at which the backup was created.

    *)
  6. backup_name : string option;
    (*

    Name of the specified backup.

    *)
  7. backup_arn : string option;
    (*

    ARN associated with the backup.

    *)
  8. table_arn : string option;
    (*

    ARN associated with the table.

    *)
  9. table_id : string option;
    (*

    Unique identifier for the table.

    *)
  10. table_name : string option;
    (*

    Name of the table.

    *)
}

Contains details for the backup.

type list_backups_output = {
  1. last_evaluated_backup_arn : string option;
    (*

    The ARN of the backup last evaluated when the current page of results was returned, inclusive of the current page of results. This value may be specified as the ExclusiveStartBackupArn of a new ListBackups operation in order to fetch the next page of results.

    If LastEvaluatedBackupArn is empty, then the last page of results has been processed and there are no more results to be retrieved.

    If LastEvaluatedBackupArn is not empty, this may or may not indicate that there is more data to be returned. All results are guaranteed to have been returned if and only if no value for LastEvaluatedBackupArn is returned.

    *)
  2. backup_summaries : backup_summary list option;
    (*

    List of BackupSummary objects.

    *)
}
type backup_type_filter =
  1. | ALL
  2. | AWS_BACKUP
  3. | SYSTEM
  4. | USER
type list_backups_input = {
  1. backup_type : backup_type_filter option;
    (*

    The backups from the table specified by BackupType are listed.

    Where BackupType can be:

    • USER - On-demand backup created by you. (The default setting if no other backup types are specified.)
    • SYSTEM - On-demand backup automatically created by DynamoDB.
    • ALL - All types of on-demand backups (USER and SYSTEM).
    *)
  2. exclusive_start_backup_arn : string option;
    (*

    Set this to the LastEvaluatedBackupArn (the Amazon Resource Name of the backup last evaluated) returned by a previous ListBackups operation in order to fetch the next page of results.

    *)
  3. time_range_upper_bound : float option;
    (*

    Only backups created before this time are listed. TimeRangeUpperBound is exclusive.

    *)
  4. time_range_lower_bound : float option;
    (*

    Only backups created after this time are listed. TimeRangeLowerBound is inclusive.

    *)
  5. limit : int option;
    (*

    Maximum number of backups to return at once.

    *)
  6. table_name : string option;
    (*

    Lists the backups from the table specified in TableName. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
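
A minimal sketch, assuming the record above: listing USER backups of one table created within a half-open time window. The epoch-second bounds and table name are placeholders.

    let backups_in_window : list_backups_input =
      { backup_type = Some USER;
        exclusive_start_backup_arn = None;
        time_range_upper_bound = Some 1700000000. (* exclusive *);
        time_range_lower_bound = Some 1690000000. (* inclusive *);
        limit = Some 25;
        table_name = Some "Music" }
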
type enable_kinesis_streaming_configuration = {
  1. approximate_creation_date_time_precision : approximate_creation_date_time_precision option;
    (*

    Toggle for the precision of the Kinesis data stream timestamp. The values are either MILLISECOND or MICROSECOND.

    *)
}

Enables setting the configuration for Kinesis Streaming.

type kinesis_streaming_destination_output = {
  1. enable_kinesis_streaming_configuration : enable_kinesis_streaming_configuration option;
    (*

    The destination for the Kinesis streaming information that is being enabled.

    *)
  2. destination_status : destination_status option;
    (*

    The current status of the replication.

    *)
  3. stream_arn : string option;
    (*

    The ARN for the specific Kinesis data stream.

    *)
  4. table_name : string option;
    (*

    The name of the table being modified.

    *)
}
type kinesis_streaming_destination_input = {
  1. enable_kinesis_streaming_configuration : enable_kinesis_streaming_configuration option;
    (*

    The source for the Kinesis streaming information that is being enabled.

    *)
  2. stream_arn : string;
    (*

    The ARN for a Kinesis data stream.

    *)
  3. table_name : string;
    (*

    The name of the DynamoDB table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
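
A minimal sketch, assuming the records above and that the timestamp-precision variant exposes a MICROSECOND constructor as the prose suggests. The stream ARN and table name are placeholders.

    let enable_streaming : kinesis_streaming_destination_input =
      { enable_kinesis_streaming_configuration =
          (* MICROSECOND is assumed from the prose; MILLISECOND is the other
             documented value. *)
          Some { approximate_creation_date_time_precision = Some MICROSECOND };
        stream_arn = "arn:aws:kinesis:us-east-1:123456789012:stream/music-stream";
        table_name = "Music" }
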
type kinesis_data_stream_destination = {
  1. approximate_creation_date_time_precision : approximate_creation_date_time_precision option;
    (*

    The precision of the Kinesis data stream timestamp. The values are either MILLISECOND or MICROSECOND.

    *)
  2. destination_status_description : string option;
    (*

    The human-readable string that corresponds to the replica status.

    *)
  3. destination_status : destination_status option;
    (*

    The current status of replication.

    *)
  4. stream_arn : string option;
    (*

    The ARN for a specific Kinesis data stream.

    *)
}

Describes a Kinesis data stream destination.

type keys_and_attributes = {
  1. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  2. projection_expression : string option;
    (*

    A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the ProjectionExpression must be separated by commas.

    If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

    For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  3. consistent_read : bool option;
    (*

    The consistency of a read operation. If set to true, then a strongly consistent read is used; otherwise, an eventually consistent read is used.

    *)
  4. attributes_to_get : string list option;
    (*

    This is a legacy parameter. Use ProjectionExpression instead. For more information, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

    *)
  5. keys : (string * attribute_value) list list;
    (*

    The primary key attribute values that define the items and the attributes associated with the items.

    *)
}

Represents a set of primary keys and, for each key, the attributes to retrieve from the table.

For each primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide the partition key. For a composite primary key, you must provide both the partition key and the sort key.
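
A minimal sketch, assuming the record above: two composite-key items requested with a two-attribute projection. All names are hypothetical.

    let request : keys_and_attributes =
      { expression_attribute_names = None;
        projection_expression = Some "Artist, AlbumTitle";
        consistent_read = None;
        attributes_to_get = None;
        keys =
          [ [ ("Artist", S "Acme Band"); ("SongTitle", S "Happy Day") ];
            [ ("Artist", S "Acme Band"); ("SongTitle", S "PigPen Stomp") ] ] }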

type invalid_export_time_exception = {
  1. message : string option;
}

The specified ExportTime is outside of the point in time recovery window.

type csv_options = {
  1. header_list : string list option;
    (*

    List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, then the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.

    *)
  2. delimiter : string option;
    (*

    The delimiter used for separating items in the CSV file being imported.

    *)
}

Processing options for the CSV file being imported.

type input_format_options = {
  1. csv : csv_options option;
    (*

    The options for imported source files in CSV format. The values are Delimiter and HeaderList.

    *)
}

The format options for the data that was imported into the target table. There is one value, CsvOption.
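
A minimal sketch, assuming the two records above: CSV options with an explicit header list, so the first line of every source file is treated as data. The column names are placeholders.

    let format_opts : input_format_options =
      { csv = Some { header_list = Some [ "pk"; "sk"; "payload" ];
                     delimiter = Some "," } }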

type input_compression_type =
  1. | NONE
  2. | ZSTD
  3. | GZIP
type export_view_type =
  1. | NEW_AND_OLD_IMAGES
  2. | NEW_IMAGE
type incremental_export_specification = {
  1. export_view_type : export_view_type option;
    (*

    The view type that was chosen for the export. Valid values are NEW_AND_OLD_IMAGES and NEW_IMAGE. The default value is NEW_AND_OLD_IMAGES.

    *)
  2. export_to_time : float option;
    (*

    Time in the past which provides the exclusive end range for the export table's data, counted in seconds from the start of the Unix epoch. The incremental export will reflect the table's state just prior to this point in time. If this is not provided, the latest time with data available will be used.

    *)
  3. export_from_time : float option;
    (*

    Time in the past which provides the inclusive start range for the export table's data, counted in seconds from the start of the Unix epoch. The incremental export will reflect the table's state including and after this point in time.

    *)
}

Optional object containing the parameters specific to an incremental export.

type import_table_description = {
  1. failure_message : string option;
    (*

    The error message corresponding to the failure that the import job ran into during execution.

    *)
  2. failure_code : string option;
    (*

    The error code corresponding to the failure that the import job ran into during execution.

    *)
  3. imported_item_count : int option;
    (*

    The number of items successfully imported into the new table.

    *)
  4. processed_item_count : int option;
    (*

    The total number of items processed from the source file.

    *)
  5. processed_size_bytes : int option;
    (*

    The total size of data processed from the source file, in bytes.

    *)
  6. end_time : float option;
    (*

    The time at which the creation of the table associated with this import task completed.

    *)
  7. start_time : float option;
    (*

    The time when this import task started.

    *)
  8. table_creation_parameters : table_creation_parameters option;
    (*

    The parameters for the new table that is being imported into.

    *)
  9. input_compression_type : input_compression_type option;
    (*

    The compression options for the data that has been imported into the target table. The values are NONE, GZIP, or ZSTD.

    *)
  10. input_format_options : input_format_options option;
    (*

    The format options for the data that was imported into the target table. There is one value, CsvOption.

    *)
  11. input_format : input_format option;
    (*

    The format of the source data going into the target table.

    *)
  12. cloud_watch_log_group_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the CloudWatch log group associated with the target table.

    *)
  13. error_count : int option;
    (*

    The number of errors that occurred while importing the source file into the target table.

    *)
  14. s3_bucket_source : s3_bucket_source option;
    (*

    Values for the S3 bucket the source file is imported from. Includes bucket name (required), key prefix (optional) and bucket account owner ID (optional).

    *)
  15. client_token : string option;
    (*

    The client token that was provided for the import task. Reusing the client token on retry makes a call to ImportTable idempotent.

    *)
  16. table_id : string option;
    (*

    The table ID corresponding to the table created by the import table process.

    *)
  17. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the table being imported into.

    *)
  18. import_status : import_status option;
    (*

    The status of the import.

    *)
  19. import_arn : string option;
    (*

    The Amazon Resource Name (ARN) corresponding to the import request.

    *)
}

Represents the properties of the table being imported into.

type import_table_output = {
  1. import_table_description : import_table_description;
    (*

    Represents the properties of the table created for the import, and parameters of the import. The import parameters include import status, how many items were processed, and how many errors were encountered.

    *)
}
type import_table_input = {
  1. table_creation_parameters : table_creation_parameters;
    (*

    Parameters for the table to import the data into.

    *)
  2. input_compression_type : input_compression_type option;
    (*

    Type of compression to be used on the input coming from the imported table.

    *)
  3. input_format_options : input_format_options option;
    (*

    Additional properties that specify how the input is formatted.

    *)
  4. input_format : input_format;
    (*

    The format of the source data. Valid values for InputFormat are CSV, DYNAMODB_JSON, or ION.

    *)
  5. s3_bucket_source : s3_bucket_source;
    (*

    The S3 bucket that provides the source for the import.

    *)
  6. client_token : string option;
    (*

    Providing a ClientToken makes the call to ImportTable idempotent, meaning that multiple identical calls have the same effect as one single call.

    A client token is valid for 8 hours after the first request that uses it is completed. After 8 hours, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 8 hours, or the result might not be idempotent.

    If you submit a request with the same client token but a change in other parameters within the 8-hour idempotency window, DynamoDB returns an IdempotentParameterMismatch exception.

    *)
}
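
A minimal sketch, assuming the record above: importing gzipped CSV data. The table_creation_parameters and s3_bucket_source values are taken as arguments because their fields are defined elsewhere in this module; the client token is a placeholder.

    let import_request ~(params : table_creation_parameters)
        ~(source : s3_bucket_source) : import_table_input =
      { table_creation_parameters = params;
        input_compression_type = Some GZIP;
        input_format_options =
          Some { csv = Some { header_list = None; delimiter = Some "," } };
        input_format = CSV;
        s3_bucket_source = source;
        client_token = Some "import-2024-06-01" (* placeholder; enables idempotent retries *) }
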
type import_conflict_exception = {
  1. message : string option;
}

There was a conflict when importing from the specified S3 source. This can occur when the current import conflicts with a previous import request that had the same client token.

type import_not_found_exception = {
  1. message : string option;
}

The specified import was not found.

type global_table_already_exists_exception = {
  1. message : string option;
}

The specified global table already exists.

type get_resource_policy_output = {
  1. revision_id : string option;
    (*

    A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic.

    *)
  2. policy : string option;
    (*

    The resource-based policy document attached to the resource, which can be a table or stream, in JSON format.

    *)
}
type get_resource_policy_input = {
  1. resource_arn : string;
    (*

    The Amazon Resource Name (ARN) of the DynamoDB resource to which the policy is attached. The resources you can specify include tables and streams.

    *)
}
type get_item_output = {
  1. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the GetItem operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Capacity unit consumption for read operations in the Amazon DynamoDB Developer Guide.

    *)
  2. item : (string * attribute_value) list option;
    (*

    A map of attribute names to AttributeValue objects, as specified by ProjectionExpression.

    *)
}

Represents the output of a GetItem operation.

type get_item_input = {
  1. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  2. projection_expression : string option;
    (*

    A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

    If no attribute names are specified, then all attributes are returned. If any of the requested attributes are not found, they do not appear in the result.

    For more information, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  3. return_consumed_capacity : return_consumed_capacity option;
  4. consistent_read : bool option;
    (*

    Determines the read consistency model: If set to true, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.

    *)
  5. attributes_to_get : string list option;
    (*

    This is a legacy parameter. Use ProjectionExpression instead. For more information, see AttributesToGet in the Amazon DynamoDB Developer Guide.

    *)
  6. key : (string * attribute_value) list;
    (*

    A map of attribute names to AttributeValue objects, representing the primary key of the item to retrieve.

    For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.

    *)
  7. table_name : string;
    (*

    The name of the table containing the requested item. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a GetItem operation.
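
A minimal sketch, assuming the record above: a strongly consistent read of one composite-key item, projecting a single attribute. All names are hypothetical.

    let get : get_item_input =
      { expression_attribute_names = None;
        projection_expression = Some "AlbumTitle";
        return_consumed_capacity = None;
        consistent_read = Some true;
        attributes_to_get = None;
        key = [ ("Artist", S "Acme Band"); ("SongTitle", S "Happy Day") ];
        table_name = "Music" }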

type failure_exception = {
  1. exception_description : string option;
    (*

    Description of the failure.

    *)
  2. exception_name : string option;
    (*

    Exception name.

    *)
}

Represents a failure in a contributor insights operation.

type export_format =
  1. | ION
  2. | DYNAMODB_JSON
type export_description = {
  1. incremental_export_specification : incremental_export_specification option;
    (*

    Optional object containing the parameters specific to an incremental export.

    *)
  2. export_type : export_type option;
    (*

    The type of export that was performed. Valid values are FULL_EXPORT or INCREMENTAL_EXPORT.

    *)
  3. item_count : int option;
    (*

    The number of items exported.

    *)
  4. billed_size_bytes : int option;
    (*

    The billable size of the table export.

    *)
  5. export_format : export_format option;
    (*

    The format of the exported data. Valid values for ExportFormat are DYNAMODB_JSON or ION.

    *)
  6. failure_message : string option;
    (*

    Export failure reason description.

    *)
  7. failure_code : string option;
    (*

    Status code for the result of the failed export.

    *)
  8. s3_sse_kms_key_id : string option;
    (*

    The ID of the KMS managed key used to encrypt the S3 bucket where export data is stored (if applicable).

    *)
  9. s3_sse_algorithm : s3_sse_algorithm option;
    (*

    Type of encryption used on the bucket where export data is stored. Valid values for S3SseAlgorithm are:

    • AES256 - server-side encryption with Amazon S3 managed keys
    • KMS - server-side encryption with KMS managed keys
    *)
  10. s3_prefix : string option;
    (*

    The Amazon S3 bucket prefix used as the file name and path of the exported snapshot.

    *)
  11. s3_bucket_owner : string option;
    (*

    The ID of the Amazon Web Services account that owns the bucket containing the export.

    *)
  12. s3_bucket : string option;
    (*

    The name of the Amazon S3 bucket containing the export.

    *)
  13. client_token : string option;
    (*

    The client token that was provided for the export task. A client token makes calls to ExportTableToPointInTime idempotent, meaning that multiple identical calls have the same effect as one single call.

    *)
  14. export_time : float option;
    (*

    Point in time from which table data was exported.

    *)
  15. table_id : string option;
    (*

    Unique ID of the table that was exported.

    *)
  16. table_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the table that was exported.

    *)
  17. export_manifest : string option;
    (*

    The name of the manifest file for the export task.

    *)
  18. end_time : float option;
    (*

    The time at which the export task completed.

    *)
  19. start_time : float option;
    (*

    The time at which the export task began.

    *)
  20. export_status : export_status option;
    (*

    Export can be in one of the following states: IN_PROGRESS, COMPLETED, or FAILED.

    *)
  21. export_arn : string option;
    (*

    The Amazon Resource Name (ARN) of the table export.

    *)
}

Represents the properties of the exported table.

type export_table_to_point_in_time_output = {
  1. export_description : export_description option;
    (*

    Contains a description of the table export.

    *)
}
type export_table_to_point_in_time_input = {
  1. incremental_export_specification : incremental_export_specification option;
    (*

    Optional object containing the parameters specific to an incremental export.

    *)
  2. export_type : export_type option;
    (*

    Choice of whether to execute as a full export or incremental export. Valid values are FULL_EXPORT or INCREMENTAL_EXPORT. The default value is FULL_EXPORT. If INCREMENTAL_EXPORT is provided, the IncrementalExportSpecification must also be used.

    *)
  3. export_format : export_format option;
    (*

    The format for the exported data. Valid values for ExportFormat are DYNAMODB_JSON or ION.

    *)
  4. s3_sse_kms_key_id : string option;
    (*

    The ID of the KMS managed key used to encrypt the S3 bucket where export data will be stored (if applicable).

    *)
  5. s3_sse_algorithm : s3_sse_algorithm option;
    (*

    Type of encryption used on the bucket where export data will be stored. Valid values for S3SseAlgorithm are:

    • AES256 - server-side encryption with Amazon S3 managed keys
    • KMS - server-side encryption with KMS managed keys
    *)
  6. s3_prefix : string option;
    (*

    The Amazon S3 bucket prefix to use as the file name and path of the exported snapshot.

    *)
  7. s3_bucket_owner : string option;
    (*

    The ID of the Amazon Web Services account that owns the bucket the export will be stored in.

    S3BucketOwner is a required parameter when exporting to an S3 bucket in another account.

    *)
  8. s3_bucket : string;
    (*

    The name of the Amazon S3 bucket to export the snapshot to.

    *)
  9. client_token : string option;
    (*

    Providing a ClientToken makes the call to ExportTableToPointInTime idempotent, meaning that multiple identical calls have the same effect as one single call.

    A client token is valid for 8 hours after the first request that uses it is completed. After 8 hours, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 8 hours, or the result might not be idempotent.

    If you submit a request with the same client token but a change in other parameters within the 8-hour idempotency window, DynamoDB returns an ExportConflictException.

    *)
  10. export_time : float option;
    (*

    Time in the past from which to export table data, counted in seconds from the start of the Unix epoch. The table export will be a snapshot of the table's state at this point in time.

    *)
  11. table_arn : string;
    (*

    The Amazon Resource Name (ARN) associated with the table to export.

    *)
}
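
A minimal sketch, assuming the records above: requesting an incremental export over a window of table history. The ARN, bucket, and epoch-second times are placeholders.

    let incremental_export : export_table_to_point_in_time_input =
      { incremental_export_specification =
          Some { export_view_type = Some NEW_AND_OLD_IMAGES;
                 export_to_time = Some 1690086400. (* exclusive *);
                 export_from_time = Some 1690000000. (* inclusive *) };
        export_type = Some INCREMENTAL_EXPORT;
        export_format = Some DYNAMODB_JSON;
        s3_sse_kms_key_id = None;
        s3_sse_algorithm = None;
        s3_prefix = Some "exports/music";
        s3_bucket_owner = None;
        s3_bucket = "amzn-s3-demo-bucket";
        client_token = None;
        export_time = None;
        table_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Music" }
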
type export_conflict_exception = {
  1. message : string option;
}

There was a conflict when writing to the specified S3 bucket.

type export_not_found_exception = {
  1. message : string option;
}

The specified export was not found.

type execute_transaction_output = {
  1. consumed_capacity : consumed_capacity list option;
    (*

    The capacity units consumed by the entire operation. The values of the list are ordered according to the ordering of the statements.

    *)
  2. responses : item_response list option;
    (*

    The response to a PartiQL transaction.

    *)
}
type execute_transaction_input = {
  1. return_consumed_capacity : return_consumed_capacity option;
    (*

    Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response. For more information, see TransactGetItems and TransactWriteItems.

    *)
  2. client_request_token : string option;
    (*

    Providing a ClientRequestToken makes the call to ExecuteTransaction idempotent, meaning that multiple identical calls have the same effect as one single call.

    *)
  3. transact_statements : parameterized_statement list;
    (*

    The list of PartiQL statements representing the transaction to run.

    *)
}
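
A minimal sketch, assuming the records above: a two-statement PartiQL transaction with an idempotency token. The tables, values, and token are hypothetical.

    let txn : execute_transaction_input =
      { return_consumed_capacity = None;
        client_request_token = Some "txn-0001" (* placeholder idempotency token *);
        transact_statements =
          [ { return_values_on_condition_check_failure = None;
              parameters = Some [ S "Acme Band"; S "Happy Day" ];
              statement =
                {|UPDATE "Music" SET AwardsWon = 1 WHERE Artist = ? AND SongTitle = ?|} };
            { return_values_on_condition_check_failure = None;
              parameters = Some [ S "Acme Band"; S "Best Song" ];
              statement = {|INSERT INTO "Awards" VALUE {'Artist': ?, 'Award': ?}|} } ] }
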
type execute_statement_output = {
  1. last_evaluated_key : (string * attribute_value) list option;
    (*

    The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request. If LastEvaluatedKey is empty, then the "last page" of results has been processed and there is no more data to be retrieved. If LastEvaluatedKey is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedKey is empty.

    *)
  2. consumed_capacity : consumed_capacity option;
  3. next_token : string option;
    (*

    If the response of a read request exceeds the response payload limit, DynamoDB will set this value in the response. If set, you can use this value in the subsequent request to get the remaining results.

    *)
  4. items : (string * attribute_value) list list option;
    (*

    If a read operation was used, this property will contain the result of the read operation: a map of attribute names and their values. For write operations, this value will be empty.

    *)
}
type execute_statement_input = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for an ExecuteStatement operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. limit : int option;
    (*

    The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, along with a key in LastEvaluatedKey to apply in a subsequent operation so you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in LastEvaluatedKey to apply in a subsequent operation to continue the operation.

    *)
  3. return_consumed_capacity : return_consumed_capacity option;
  4. next_token : string option;
    (*

    Set this value to get remaining results, if NextToken was returned in the statement response.

    *)
  5. consistent_read : bool option;
    (*

    The consistency of a read operation. If set to true, then a strongly consistent read is used; otherwise, an eventually consistent read is used.

    *)
  6. parameters : attribute_value list option;
    (*

    The parameters for the PartiQL statement, if any.

    *)
  7. statement : string;
    (*

    The PartiQL statement representing the operation to run.

    *)
}
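
A minimal sketch, assuming the record above: a parameterized, strongly consistent PartiQL read. The table name and value are hypothetical.

    let select_songs : execute_statement_input =
      { return_values_on_condition_check_failure = None;
        limit = Some 50;
        return_consumed_capacity = None;
        next_token = None (* set from a previous response to continue paging *);
        consistent_read = Some true;
        parameters = Some [ S "Acme Band" ];
        statement = {|SELECT * FROM "Music" WHERE Artist = ?|} }
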
type duplicate_item_exception = {
  1. message : string option;
}

There was an attempt to insert an item with the same primary key as an item that already exists in the DynamoDB table.

type endpoint = {
  1. cache_period_in_minutes : int;
    (*

    Endpoint cache time to live (TTL) value.

    *)
  2. address : string;
    (*

    IP address of the endpoint.

    *)
}

Details about an endpoint.

type describe_time_to_live_output = {
  1. time_to_live_description : time_to_live_description option;
}
type describe_time_to_live_input = {
  1. table_name : string;
    (*

    The name of the table to be described. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type describe_table_replica_auto_scaling_output = {
  1. table_auto_scaling_description : table_auto_scaling_description option;
    (*

    Represents the auto scaling properties of the table.

    *)
}
type describe_table_replica_auto_scaling_input = {
  1. table_name : string;
    (*

    The name of the table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type describe_table_output = {
  1. table : table_description option;
    (*

    The properties of the table.

    *)
}

Represents the output of a DescribeTable operation.

type describe_table_input = {
  1. table_name : string;
    (*

    The name of the table to describe. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a DescribeTable operation.

type describe_limits_output = {
  1. table_max_write_capacity_units : int option;
    (*

    The maximum write capacity units that your account allows you to provision for a new table that you are creating in this Region, including the write capacity units provisioned for its global secondary indexes (GSIs).

    *)
  2. table_max_read_capacity_units : int option;
    (*

    The maximum read capacity units that your account allows you to provision for a new table that you are creating in this Region, including the read capacity units provisioned for its global secondary indexes (GSIs).

    *)
  3. account_max_write_capacity_units : int option;
    (*

    The maximum total write capacity units that your account allows you to provision across all of your tables in this Region.

    *)
  4. account_max_read_capacity_units : int option;
    (*

    The maximum total read capacity units that your account allows you to provision across all of your tables in this Region.

    *)
}

Represents the output of a DescribeLimits operation.

type describe_limits_input = unit

Represents the input of a DescribeLimits operation. Has no content.

type describe_kinesis_streaming_destination_output = {
  1. kinesis_data_stream_destinations : kinesis_data_stream_destination list option;
    (*

    The list of replica structures for the table being described.

    *)
  2. table_name : string option;
    (*

    The name of the table being described.

    *)
}
type describe_kinesis_streaming_destination_input = {
  1. table_name : string;
    (*

    The name of the table being described. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type describe_import_output = {
  1. import_table_description : import_table_description;
    (*

    Represents the properties of the table created for the import, and parameters of the import. The import parameters include import status, how many items were processed, and how many errors were encountered.

    *)
}
type describe_import_input = {
  1. import_arn : string;
    (*

    The Amazon Resource Name (ARN) associated with the table you're importing to.

    *)
}
type describe_global_table_settings_output = {
  1. replica_settings : replica_settings_description list option;
    (*

    The Region-specific settings for the global table.

    *)
  2. global_table_name : string option;
    (*

    The name of the global table.

    *)
}
type describe_global_table_settings_input = {
  1. global_table_name : string;
    (*

    The name of the global table to describe.

    *)
}
type describe_global_table_output = {
  1. global_table_description : global_table_description option;
    (*

    Contains the details of the global table.

    *)
}
type describe_global_table_input = {
  1. global_table_name : string;
    (*

    The name of the global table.

    *)
}
type describe_export_output = {
  1. export_description : export_description option;
    (*

    Represents the properties of the export.

    *)
}
type describe_export_input = {
  1. export_arn : string;
    (*

    The Amazon Resource Name (ARN) associated with the export.

    *)
}
type describe_endpoints_response = {
  1. endpoints : endpoint list;
    (*

    List of endpoints.

    *)
}
type describe_endpoints_request = unit
type describe_contributor_insights_output = {
  1. failure_exception : failure_exception option;
    (*

    Returns information about the last failure that was encountered.

    The most common exceptions for a FAILED status are:

    • LimitExceededException - Per-account Amazon CloudWatch Contributor Insights rule limit reached. Please disable Contributor Insights for other tables/indexes OR disable Contributor Insights rules before retrying.
    • AccessDeniedException - Amazon CloudWatch Contributor Insights rules cannot be modified due to insufficient permissions.
    • AccessDeniedException - Failed to create service-linked role for Contributor Insights due to insufficient permissions.
    • InternalServerError - Failed to create Amazon CloudWatch Contributor Insights rules. Please retry request.
    *)
  2. last_update_date_time : float option;
    (*

    Timestamp of the last time the status was changed.

    *)
  3. contributor_insights_status : contributor_insights_status option;
    (*

    Current status of contributor insights.

    *)
  4. contributor_insights_rule_list : string list option;
    (*

    List of names of the associated contributor insights rules.

    *)
  5. index_name : string option;
    (*

    The name of the global secondary index being described.

    *)
  6. table_name : string option;
    (*

    The name of the table being described.

    *)
}
type describe_contributor_insights_input = {
  1. index_name : string option;
    (*

    The name of the global secondary index to describe, if applicable.

    *)
  2. table_name : string;
    (*

    The name of the table to describe. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type describe_continuous_backups_output = {
  1. continuous_backups_description : continuous_backups_description option;
    (*

    Represents the continuous backups and point in time recovery settings on the table.

    *)
}
type describe_continuous_backups_input = {
  1. table_name : string;
    (*

    Name of the table for which the customer wants to check the continuous backups and point in time recovery settings.

    You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type backup_details = {
  1. backup_expiry_date_time : float option;
    (*

    Time at which the automatic on-demand backup created by DynamoDB will expire. This SYSTEM on-demand backup expires automatically 35 days after its creation.

    *)
  2. backup_creation_date_time : float;
    (*

    Time at which the backup was created. This is the request time of the backup.

    *)
  3. backup_type : backup_type;
    (*

    BackupType:

    • USER - You create and manage these using the on-demand backup feature.
    • SYSTEM - If you delete a table with point-in-time recovery enabled, a SYSTEM backup is automatically created and is retained for 35 days (at no additional cost). System backups allow you to restore the deleted table to the state it was in just before the point of deletion.
    • AWS_BACKUP - An on-demand backup that you created from the Backup service.
    *)
  4. backup_status : backup_status;
    (*

    Backup can be in one of the following states: CREATING, ACTIVE, DELETED.

    *)
  5. backup_size_bytes : int option;
    (*

    Size of the backup in bytes. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.

    *)
  6. backup_name : string;
    (*

    Name of the requested backup.

    *)
  7. backup_arn : string;
    (*

    ARN associated with the backup.

    *)
}

Contains the details of the backup created for the table.

type backup_description = {
  1. source_table_feature_details : source_table_feature_details option;
    (*

    Contains the details of the features enabled on the table when the backup was created. For example, LSIs, GSIs, streams, TTL.

    *)
  2. source_table_details : source_table_details option;
    (*

    Contains the details of the table when the backup was created.

    *)
  3. backup_details : backup_details option;
    (*

    Contains the details of the backup created for the table.

    *)
}

Contains the description of the backup created for the table.

type describe_backup_output = {
  1. backup_description : backup_description option;
    (*

    Contains the description of the backup created for the table.

    *)
}
type describe_backup_input = {
  1. backup_arn : string;
    (*

    The Amazon Resource Name (ARN) associated with the backup.

    *)
}
type delete_table_output = {
  1. table_description : table_description option;
    (*

    Represents the properties of a table.

    *)
}

Represents the output of a DeleteTable operation.

type delete_table_input = {
  1. table_name : string;
    (*

    The name of the table to delete. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a DeleteTable operation.

type delete_resource_policy_output = {
  1. revision_id : string option;
    (*

    A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic.

    This value will be empty if you make a request against a resource without a policy.

    *)
}
type delete_resource_policy_input = {
  1. expected_revision_id : string option;
    (*

    A string value that you can use to conditionally delete your policy. When you provide an expected revision ID, if the revision ID of the existing policy on the resource doesn't match or if there's no policy attached to the resource, the request will fail and return a PolicyNotFoundException.

    *)
  2. resource_arn : string;
    (*

    The Amazon Resource Name (ARN) of the DynamoDB resource from which the policy will be removed. The resources you can specify include tables and streams. If you remove the policy of a table, it will also remove the permissions for the table's indexes defined in that policy document. This is because index permissions are defined in the table's policy.

    *)
}
type delete_item_output = {
  1. item_collection_metrics : item_collection_metrics option;
    (*

    Information about item collections, if any, that were affected by the DeleteItem operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.

    Each ItemCollectionMetrics element consists of:

    • ItemCollectionKey - The partition key value of the item collection. This is the same as the partition key value of the item itself.
    • SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.

      The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.

    *)
  2. consumed_capacity : consumed_capacity option;
    (*

    The capacity units consumed by the DeleteItem operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. ConsumedCapacity is only returned if the ReturnConsumedCapacity parameter was specified. For more information, see Provisioned capacity mode in the Amazon DynamoDB Developer Guide.

    *)
  3. attributes : (string * attribute_value) list option;
    (*

    A map of attribute names to AttributeValue objects, representing the item as it appeared before the DeleteItem operation. This map appears in the response only if ReturnValues was specified as ALL_OLD in the request.

    *)
}

Represents the output of a DeleteItem operation.

type delete_item_input = {
  1. return_values_on_condition_check_failure : return_values_on_condition_check_failure option;
    (*

    An optional parameter that returns the item attributes for a DeleteItem operation that failed a condition check.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    *)
  2. expression_attribute_values : (string * attribute_value) list option;
    (*

    One or more values that can be substituted in an expression.

    Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

    Available | Backordered | Discontinued

    You would first need to specify ExpressionAttributeValues as follows:

    { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }

    You could then use these values in an expression, such as this:

    ProductStatus IN (:avail, :back, :disc)

    For more information on expression attribute values, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  3. expression_attribute_names : (string * string) list option;
    (*

    One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

    • To access an attribute whose name conflicts with a DynamoDB reserved word.
    • To create a placeholder for repeating occurrences of an attribute name in an expression.
    • To prevent special characters in an attribute name from being misinterpreted in an expression.

    Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

    • Percentile

    The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

    • {"#P":"Percentile"}

    You could then use this substitution in an expression, as in this example:

    • #P = :val

    Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

    For more information on expression attribute names, see Specifying Item Attributes in the Amazon DynamoDB Developer Guide.

    *)
  4. condition_expression : string option;
    (*

    A condition that must be satisfied in order for a conditional DeleteItem to succeed.

    An expression can contain any of the following:

    • Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size

      These function names are case-sensitive.

    • Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
    • Logical operators: AND | OR | NOT

    For more information about condition expressions, see Condition Expressions in the Amazon DynamoDB Developer Guide.

    *)
  5. return_item_collection_metrics : return_item_collection_metrics option;
    (*

    Determines whether item collection metrics are returned. If set to SIZE, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE (the default), no statistics are returned.

    *)
  6. return_consumed_capacity : return_consumed_capacity option;
  7. return_values : return_value option;
    (*

    Use ReturnValues if you want to get the item attributes as they appeared before they were deleted. For DeleteItem, the valid values are:

    • NONE - If ReturnValues is not specified, or if its value is NONE, then nothing is returned. (This setting is the default for ReturnValues.)
    • ALL_OLD - The content of the old item is returned.

    There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

    The ReturnValues parameter is used by several DynamoDB operations; however, DeleteItem does not recognize any values other than NONE or ALL_OLD.

    *)
  8. conditional_operator : conditional_operator option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see ConditionalOperator in the Amazon DynamoDB Developer Guide.

    *)
  9. expected : (string * expected_attribute_value) list option;
    (*

    This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

    *)
  10. key : (string * attribute_value) list;
    (*

    A map of attribute names to AttributeValue objects, representing the primary key of the item to delete.

    For the primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.

    *)
  11. table_name : string;
    (*

    The name of the table from which to delete the item. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}

Represents the input of a DeleteItem operation.
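
A minimal sketch of a conditional DeleteItem request, reusing the ProductStatus example from the field documentation above; the "Music" table and its composite key ("Artist", "SongTitle") are hypothetical:

let delete_input : delete_item_input = {
  return_values_on_condition_check_failure = None;
  expression_attribute_values =
    Some [ (":avail", S "Available");
           (":back", S "Backordered");
           (":disc", S "Discontinued") ];
  expression_attribute_names = None;
  condition_expression = Some "ProductStatus IN (:avail, :back, :disc)";
  return_item_collection_metrics = None;
  return_consumed_capacity = None;
  return_values = None;  (* ALL_OLD would return the deleted item *)
  conditional_operator = None;  (* legacy; superseded by condition_expression *)
  expected = None;              (* legacy; superseded by condition_expression *)
  key = [ ("Artist", S "No One You Know");
          ("SongTitle", S "Call Me Today") ];
  table_name = "Music";
}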

type delete_backup_output = {
  1. backup_description : backup_description option;
    (*

    Contains the description of the backup created for the table.

    *)
}
type delete_backup_input = {
  1. backup_arn : string;
    (*

    The ARN associated with the backup.

    *)
}
type create_table_output = {
  1. table_description : table_description option;
    (*

    Represents the properties of the table.

    *)
}

Represents the output of a CreateTable operation.

type create_table_input = {
  1. on_demand_throughput : on_demand_throughput option;
    (*

    Sets the maximum number of read and write units for the specified table in on-demand capacity mode. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both.

    *)
  2. resource_policy : string option;
    (*

    An Amazon Web Services resource-based policy document in JSON format that will be attached to the table.

    When you attach a resource-based policy while creating a table, the policy application is strongly consistent.

    The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit. For a full list of all considerations that apply for resource-based policies, see Resource-based policy considerations.

    You need to specify the CreateTable and PutResourcePolicy IAM actions for authorizing a user to create a table with a resource-based policy.

    *)
  3. deletion_protection_enabled : bool option;
    (*

    Indicates whether deletion protection is to be enabled (true) or disabled (false) on the table.

    *)
  4. table_class : table_class option;
    (*

    The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.

    *)
  5. tags : tag list option;
    (*

    A list of key-value pairs to label the table. For more information, see Tagging for DynamoDB.

    *)
  6. sse_specification : sse_specification option;
    (*

    Represents the settings used to enable server-side encryption.

    *)
  7. stream_specification : stream_specification option;
    (*

    The settings for DynamoDB Streams on the table. These settings consist of:

    • StreamEnabled - Indicates whether DynamoDB Streams is to be enabled (true) or disabled (false).
    • StreamViewType - When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Valid values for StreamViewType are:

      • KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
      • NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
      • OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
      • NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
    *)
  8. provisioned_throughput : provisioned_throughput option;
    (*

    Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the UpdateTable operation.

    If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.

    For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.

    *)
  9. billing_mode : billing_mode option;
    (*

    Controls how you are charged for read and write throughput and how you manage capacity. This setting can be changed later.

    • PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned capacity mode.
    • PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-demand capacity mode.
    *)
  10. global_secondary_indexes : global_secondary_index list option;
    (*

    One or more global secondary indexes (the maximum is 20) to be created on the table. Each global secondary index in the array includes the following:

    • IndexName - The name of the global secondary index. Must be unique only for this table.
    • KeySchema - Specifies the key schema for the global secondary index.
    • Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:

      • ProjectionType - One of the following:

        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes is in NonKeyAttributes.
        • ALL - All of the table attributes are projected into the index.
      • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
    • ProvisionedThroughput - The provisioned throughput settings for the global secondary index, consisting of read and write capacity units.
    *)
  11. local_secondary_indexes : local_secondary_index list option;
    (*

    One or more local secondary indexes (the maximum is 5) to be created on the table. Each index is scoped to a given partition key value. There is a 10 GB size limit per partition key value; otherwise, the size of a local secondary index is unconstrained.

    Each local secondary index in the array includes the following:

    • IndexName - The name of the local secondary index. Must be unique only for this table.
    • KeySchema - Specifies the key schema for the local secondary index. The key schema must begin with the same partition key as the table.
    • Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:

      • ProjectionType - One of the following:

        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes is in NonKeyAttributes.
        • ALL - All of the table attributes are projected into the index.
      • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
    *)
  12. key_schema : key_schema_element list;
    (*

    Specifies the attributes that make up the primary key for a table or an index. The attributes in KeySchema must also be defined in the AttributeDefinitions array. For more information, see Data Model in the Amazon DynamoDB Developer Guide.

    Each KeySchemaElement in the array is composed of:

    • AttributeName - The name of this key attribute.
    • KeyType - The role that the key attribute will assume:

      • HASH - partition key
      • RANGE - sort key

    The partition key of an item is also known as its hash attribute. The term "hash attribute" derives from the DynamoDB usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.

    The sort key of an item is also known as its range attribute. The term "range attribute" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

    For a simple primary key (partition key), you must provide exactly one element with a KeyType of HASH.

    For a composite primary key (partition key and sort key), you must provide exactly two elements, in this order: The first element must have a KeyType of HASH, and the second element must have a KeyType of RANGE.

    For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.

    *)
  13. table_name : string;
    (*

    The name of the table to create. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
  14. attribute_definitions : attribute_definition list;
    (*

    An array of attributes that describe the key schema for the table and indexes.

    *)
}

Represents the input of a CreateTable operation.
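
A minimal sketch of a CreateTable request for a hypothetical "Music" table with a composite primary key, built with the make_* helpers from the Builders section below. The HASH, RANGE, and S constructors are assumed to mirror the service enum values, in the same style as the attribute_value constructors; treat this as a sketch rather than a verbatim recipe:

let table_input : create_table_input = {
  on_demand_throughput = None;
  resource_policy = None;
  deletion_protection_enabled = None;
  table_class = None;
  tags = None;
  sse_specification = None;
  stream_specification = None;
  provisioned_throughput =
    Some (make_provisioned_throughput
            ~write_capacity_units:5 ~read_capacity_units:5 ());
  billing_mode = None;  (* omitting billing_mode defaults to PROVISIONED *)
  global_secondary_indexes = None;
  local_secondary_indexes = None;
  key_schema =
    [ make_key_schema_element ~key_type:HASH ~attribute_name:"Artist" ();
      make_key_schema_element ~key_type:RANGE ~attribute_name:"SongTitle" () ];
  table_name = "Music";
  attribute_definitions =
    [ make_attribute_definition ~attribute_type:S ~attribute_name:"Artist" ();
      make_attribute_definition ~attribute_type:S ~attribute_name:"SongTitle" () ];
}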

type create_global_table_output = {
  1. global_table_description : global_table_description option;
    (*

    Contains the details of the global table.

    *)
}
type create_global_table_input = {
  1. replication_group : replica list;
    (*

    The Regions where the global table needs to be created.

    *)
  2. global_table_name : string;
    (*

    The global table name.

    *)
}
type create_backup_output = {
  1. backup_details : backup_details option;
    (*

    Contains the details of the backup created for the table.

    *)
}
type create_backup_input = {
  1. backup_name : string;
    (*

    Specified name for the backup.

    *)
  2. table_name : string;
    (*

    The name of the table. You can also provide the Amazon Resource Name (ARN) of the table in this parameter.

    *)
}
type batch_write_item_output = {
  1. consumed_capacity : consumed_capacity list option;
    (*

    The capacity units consumed by the entire BatchWriteItem operation.

    Each element consists of:

    • TableName - The table that consumed the provisioned throughput.
    • CapacityUnits - The total number of capacity units consumed.
    *)
  2. item_collection_metrics : (string * item_collection_metrics list) list option;
    (*

    A list of tables that were processed by BatchWriteItem and, for each table, information about any item collections that were affected by individual DeleteItem or PutItem operations.

    Each entry consists of the following subelements:

    • ItemCollectionKey - The partition key value of the item collection. This is the same as the partition key value of the item.
    • SizeEstimateRangeGB - An estimate of item collection size, expressed in GB. This is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on the table. Use this estimate to measure whether a local secondary index is approaching its size limit.

      The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.

    *)
  3. unprocessed_items : (string * write_request list) list option;
    (*

    A map of tables and requests against those tables that were not processed. The UnprocessedItems value is in the same form as RequestItems, so you can provide this value directly to a subsequent BatchWriteItem operation. For more information, see RequestItems in the Request Parameters section.

    Each UnprocessedItems entry consists of a table name or table ARN and, for that table, a list of operations to perform (DeleteRequest or PutRequest).

    • DeleteRequest - Perform a DeleteItem operation on the specified item. The item to be deleted is identified by a Key subelement:

      • Key - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value.
    • PutRequest - Perform a PutItem operation on the specified item. The item to be put is identified by an Item subelement:

      • Item - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values will be rejected with a ValidationException exception.

        If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition.

    If there are no unprocessed items remaining, the response contains an empty UnprocessedItems map.

    *)
}

Represents the output of a BatchWriteItem operation.

type batch_write_item_input = {
  1. return_item_collection_metrics : return_item_collection_metrics option;
    (*

    Determines whether item collection metrics are returned. If set to SIZE, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE (the default), no statistics are returned.

    *)
  2. return_consumed_capacity : return_consumed_capacity option;
  3. request_items : (string * write_request list) list;
    (*

    A map of one or more table names or table ARNs and, for each table, a list of operations to be performed (DeleteRequest or PutRequest). Each element in the map consists of the following:

    • DeleteRequest - Perform a DeleteItem operation on the specified item. The item to be deleted is identified by a Key subelement:

      • Key - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value. For each primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
    • PutRequest - Perform a PutItem operation on the specified item. The item to be put is identified by an Item subelement:

      • Item - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values are rejected with a ValidationException exception.

        If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition.

    *)
}

Represents the input of a BatchWriteItem operation.
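
The UnprocessedItems map enables the standard retry loop: resubmit whatever comes back until the map is empty. A sketch, assuming a hypothetical run_batch_write wrapper around the BatchWriteItem operation; production code should add exponential backoff between iterations:

let rec drain_batch_write run_batch_write request_items =
  let (input : batch_write_item_input) =
    { return_item_collection_metrics = None;
      return_consumed_capacity = None;
      request_items } in
  let (out : batch_write_item_output) = run_batch_write input in
  match out.unprocessed_items with
  | None | Some [] -> ()  (* everything was written *)
  | Some remaining -> drain_batch_write run_batch_write remaining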

type batch_get_item_output = {
  1. consumed_capacity : consumed_capacity list option;
    (*

    The read capacity units consumed by the entire BatchGetItem operation.

    Each element consists of:

    • TableName - The table that consumed the provisioned throughput.
    • CapacityUnits - The total number of capacity units consumed.
    *)
  2. unprocessed_keys : (string * keys_and_attributes) list option;
    (*

    A map of tables and their respective keys that were not processed with the current response. The UnprocessedKeys value is in the same form as RequestItems, so the value can be provided directly to a subsequent BatchGetItem operation. For more information, see RequestItems in the Request Parameters section.

    Each element consists of:

    • Keys - An array of primary key attribute values that define specific items in the table.
    • ProjectionExpression - One or more attributes to be retrieved from the table or index. By default, all attributes are returned. If a requested attribute is not found, it does not appear in the result.
    • ConsistentRead - The consistency of a read operation. If set to true, then a strongly consistent read is used; otherwise, an eventually consistent read is used.

    If there are no unprocessed keys remaining, the response contains an empty UnprocessedKeys map.

    *)
  3. responses : (string * (string * attribute_value) list list) list option;
    (*

    A map of table name or table ARN to a list of items. Each object in Responses consists of a table name or ARN, along with a map of attribute data consisting of the data type and attribute value.

    *)
}

Represents the output of a BatchGetItem operation.

type batch_get_item_input = {
  1. return_consumed_capacity : return_consumed_capacity option;
  2. request_items : (string * keys_and_attributes) list;
    (*

    A map of one or more table names or table ARNs and, for each table, a map that describes one or more items to retrieve from that table. Each table name or ARN can be used only once per BatchGetItem request.

    Each element in the map of items to retrieve consists of the following:

    • ConsistentRead - If true, a strongly consistent read is used; if false (the default), an eventually consistent read is used.
    • ExpressionAttributeNames - One or more substitution tokens for attribute names in the ProjectionExpression parameter. The following are some use cases for using ExpressionAttributeNames:

      • To access an attribute whose name conflicts with a DynamoDB reserved word.
      • To create a placeholder for repeating occurrences of an attribute name in an expression.
      • To prevent special characters in an attribute name from being misinterpreted in an expression.

      Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

      • Percentile

      The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

      • {"#P":"Percentile"}

      You could then use this substitution in an expression, as in this example:

      • #P = :val

      Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

      For more information about expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

    • Keys - An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide the partition key value. For a composite key, you must provide both the partition key value and the sort key value.
    • ProjectionExpression - A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

      If no attribute names are specified, then all attributes are returned. If any of the requested attributes are not found, they do not appear in the result.

      For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

    • AttributesToGet - This is a legacy parameter. Use ProjectionExpression instead. For more information, see AttributesToGet in the Amazon DynamoDB Developer Guide.
    *)
}

Represents the input of a BatchGetItem operation.
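
UnprocessedKeys follows the same retry pattern as UnprocessedItems in BatchWriteItem: feed the returned map back as the next request until it comes back empty. A sketch, assuming a hypothetical run_batch_get wrapper around the BatchGetItem operation:

let rec drain_batch_get run_batch_get request_items acc =
  let (input : batch_get_item_input) =
    { return_consumed_capacity = None; request_items } in
  let (out : batch_get_item_output) = run_batch_get input in
  let acc = acc @ Option.value ~default:[] out.responses in
  match out.unprocessed_keys with
  | None | Some [] -> acc  (* all requested items retrieved *)
  | Some remaining -> drain_batch_get run_batch_get remaining acc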

type batch_execute_statement_output = {
  1. consumed_capacity : consumed_capacity list option;
    (*

    The capacity units consumed by the entire operation. The values of the list are ordered according to the ordering of the statements.

    *)
  2. responses : batch_statement_response list option;
    (*

    The response to each PartiQL statement in the batch. The values of the list are ordered according to the ordering of the request statements.

    *)
}
type batch_execute_statement_input = {
  1. return_consumed_capacity : return_consumed_capacity option;
  2. statements : batch_statement_request list;
    (*

    The list of PartiQL statements representing the batch to run.

    *)
}

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.

DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.

type base_document = Smaws_Lib.Json.t

Builders

val make_put_request : item:(string * attribute_value) list -> unit -> put_request

Create a put_request type

val make_delete_request : key:(string * attribute_value) list -> unit -> delete_request

Create a delete_request type

val make_write_request : ?delete_request:delete_request -> ?put_request:put_request -> unit -> write_request

Create a write_request type
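
A short usage sketch of the three builders above: one put and one delete wrapped as write_requests, suitable for the request_items of a BatchWriteItem call. The "Music" table and its attribute values are hypothetical:

let requests : write_request list =
  [ make_write_request
      ~put_request:(make_put_request
                      ~item:[ ("Artist", S "Acme Band");
                              ("SongTitle", S "Happy Day") ] ())
      ();
    make_write_request
      ~delete_request:(make_delete_request
                         ~key:[ ("Artist", S "No One You Know");
                                ("SongTitle", S "Call Me Today") ] ())
      () ]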

val make_time_to_live_specification : attribute_name:string -> enabled:bool -> unit -> time_to_live_specification
val make_update_time_to_live_output : ?time_to_live_specification:time_to_live_specification -> unit -> update_time_to_live_output
val make_update_time_to_live_input : time_to_live_specification:time_to_live_specification -> table_name:string -> unit -> update_time_to_live_input
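
For example, enabling TTL on a hypothetical "Sessions" table whose items carry an "expires_at" timestamp attribute:

let ttl_input =
  make_update_time_to_live_input
    ~time_to_live_specification:
      (make_time_to_live_specification
         ~attribute_name:"expires_at" ~enabled:true ())
    ~table_name:"Sessions" ()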
val make_auto_scaling_target_tracking_scaling_policy_configuration_description : ?scale_out_cooldown:int -> ?scale_in_cooldown:int -> ?disable_scale_in:bool -> target_value:float -> unit -> auto_scaling_target_tracking_scaling_policy_configuration_description
val make_auto_scaling_policy_description : ?target_tracking_scaling_policy_configuration: auto_scaling_target_tracking_scaling_policy_configuration_description -> ?policy_name:string -> unit -> auto_scaling_policy_description
val make_auto_scaling_settings_description : ?scaling_policies:auto_scaling_policy_description list -> ?auto_scaling_role_arn:string -> ?auto_scaling_disabled:bool -> ?maximum_units:int -> ?minimum_units:int -> unit -> auto_scaling_settings_description
val make_replica_global_secondary_index_auto_scaling_description : ?provisioned_write_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?provisioned_read_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?index_status:index_status -> ?index_name:string -> unit -> replica_global_secondary_index_auto_scaling_description
val make_replica_auto_scaling_description : ?replica_status:replica_status -> ?replica_provisioned_write_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?replica_provisioned_read_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?global_secondary_indexes: replica_global_secondary_index_auto_scaling_description list -> ?region_name:string -> unit -> replica_auto_scaling_description
val make_table_auto_scaling_description : ?replicas:replica_auto_scaling_description list -> ?table_status:table_status -> ?table_name:string -> unit -> table_auto_scaling_description
val make_update_table_replica_auto_scaling_output : ?table_auto_scaling_description:table_auto_scaling_description -> unit -> update_table_replica_auto_scaling_output
val make_auto_scaling_target_tracking_scaling_policy_configuration_update : ?scale_out_cooldown:int -> ?scale_in_cooldown:int -> ?disable_scale_in:bool -> target_value:float -> unit -> auto_scaling_target_tracking_scaling_policy_configuration_update
val make_auto_scaling_policy_update : ?policy_name:string -> target_tracking_scaling_policy_configuration: auto_scaling_target_tracking_scaling_policy_configuration_update -> unit -> auto_scaling_policy_update
val make_auto_scaling_settings_update : ?scaling_policy_update:auto_scaling_policy_update -> ?auto_scaling_role_arn:string -> ?auto_scaling_disabled:bool -> ?maximum_units:int -> ?minimum_units:int -> unit -> auto_scaling_settings_update
val make_global_secondary_index_auto_scaling_update : ?provisioned_write_capacity_auto_scaling_update:auto_scaling_settings_update -> ?index_name:string -> unit -> global_secondary_index_auto_scaling_update
val make_replica_global_secondary_index_auto_scaling_update : ?provisioned_read_capacity_auto_scaling_update:auto_scaling_settings_update -> ?index_name:string -> unit -> replica_global_secondary_index_auto_scaling_update
val make_replica_auto_scaling_update : ?replica_provisioned_read_capacity_auto_scaling_update: auto_scaling_settings_update -> ?replica_global_secondary_index_updates: replica_global_secondary_index_auto_scaling_update list -> region_name:string -> unit -> replica_auto_scaling_update
val make_update_table_replica_auto_scaling_input : ?replica_updates:replica_auto_scaling_update list -> ?provisioned_write_capacity_auto_scaling_update:auto_scaling_settings_update -> ?global_secondary_index_updates: global_secondary_index_auto_scaling_update list -> table_name:string -> unit -> update_table_replica_auto_scaling_input
val make_attribute_definition : attribute_type:scalar_attribute_type -> attribute_name:string -> unit -> attribute_definition

Create an attribute_definition type

val make_key_schema_element : key_type:key_type -> attribute_name:string -> unit -> key_schema_element

Create a key_schema_element type

val make_provisioned_throughput_description : ?write_capacity_units:int -> ?read_capacity_units:int -> ?number_of_decreases_today:int -> ?last_decrease_date_time:float -> ?last_increase_date_time:float -> unit -> provisioned_throughput_description
val make_billing_mode_summary : ?last_update_to_pay_per_request_date_time:float -> ?billing_mode:billing_mode -> unit -> billing_mode_summary

Create a billing_mode_summary type

val make_projection : ?non_key_attributes:string list -> ?projection_type:projection_type -> unit -> projection

Create a projection type

val make_local_secondary_index_description : ?index_arn:string -> ?item_count:int -> ?index_size_bytes:int -> ?projection:projection -> ?key_schema:key_schema_element list -> ?index_name:string -> unit -> local_secondary_index_description
val make_on_demand_throughput : ?max_write_request_units:int -> ?max_read_request_units:int -> unit -> on_demand_throughput

Create an on_demand_throughput type

val make_global_secondary_index_description : ?on_demand_throughput:on_demand_throughput -> ?index_arn:string -> ?item_count:int -> ?index_size_bytes:int -> ?provisioned_throughput:provisioned_throughput_description -> ?backfilling:bool -> ?index_status:index_status -> ?projection:projection -> ?key_schema:key_schema_element list -> ?index_name:string -> unit -> global_secondary_index_description
val make_stream_specification : ?stream_view_type:stream_view_type -> stream_enabled:bool -> unit -> stream_specification

Create a stream_specification type

val make_provisioned_throughput_override : ?read_capacity_units:int -> unit -> provisioned_throughput_override
val make_on_demand_throughput_override : ?max_read_request_units:int -> unit -> on_demand_throughput_override
val make_replica_global_secondary_index_description : ?on_demand_throughput_override:on_demand_throughput_override -> ?provisioned_throughput_override:provisioned_throughput_override -> ?index_name:string -> unit -> replica_global_secondary_index_description
val make_table_class_summary : ?last_update_date_time:float -> ?table_class:table_class -> unit -> table_class_summary

Create a table_class_summary type

val make_replica_description : ?replica_table_class_summary:table_class_summary -> ?replica_inaccessible_date_time:float -> ?global_secondary_indexes:replica_global_secondary_index_description list -> ?on_demand_throughput_override:on_demand_throughput_override -> ?provisioned_throughput_override:provisioned_throughput_override -> ?kms_master_key_id:string -> ?replica_status_percent_progress:string -> ?replica_status_description:string -> ?replica_status:replica_status -> ?region_name:string -> unit -> replica_description

Create a replica_description type

val make_restore_summary : ?source_table_arn:string -> ?source_backup_arn:string -> restore_in_progress:bool -> restore_date_time:float -> unit -> restore_summary

Create a restore_summary type

val make_sse_description : ?inaccessible_encryption_date_time:float -> ?kms_master_key_arn:string -> ?sse_type:sse_type -> ?status:sse_status -> unit -> sse_description

Create an sse_description type

val make_archival_summary : ?archival_backup_arn:string -> ?archival_reason:string -> ?archival_date_time:float -> unit -> archival_summary

Create an archival_summary type

val make_table_description : ?on_demand_throughput:on_demand_throughput -> ?deletion_protection_enabled:bool -> ?table_class_summary:table_class_summary -> ?archival_summary:archival_summary -> ?sse_description:sse_description -> ?restore_summary:restore_summary -> ?replicas:replica_description list -> ?global_table_version:string -> ?latest_stream_arn:string -> ?latest_stream_label:string -> ?stream_specification:stream_specification -> ?global_secondary_indexes:global_secondary_index_description list -> ?local_secondary_indexes:local_secondary_index_description list -> ?billing_mode_summary:billing_mode_summary -> ?table_id:string -> ?table_arn:string -> ?item_count:int -> ?table_size_bytes:int -> ?provisioned_throughput:provisioned_throughput_description -> ?creation_date_time:float -> ?table_status:table_status -> ?key_schema:key_schema_element list -> ?table_name:string -> ?attribute_definitions:attribute_definition list -> unit -> table_description

Create a table_description type

val make_update_table_output : ?table_description:table_description -> unit -> update_table_output

Create an update_table_output type

val make_provisioned_throughput : write_capacity_units:int -> read_capacity_units:int -> unit -> provisioned_throughput
val make_update_global_secondary_index_action : ?on_demand_throughput:on_demand_throughput -> ?provisioned_throughput:provisioned_throughput -> index_name:string -> unit -> update_global_secondary_index_action
val make_create_global_secondary_index_action : ?on_demand_throughput:on_demand_throughput -> ?provisioned_throughput:provisioned_throughput -> projection:projection -> key_schema:key_schema_element list -> index_name:string -> unit -> create_global_secondary_index_action
val make_delete_global_secondary_index_action : index_name:string -> unit -> delete_global_secondary_index_action
val make_sse_specification : ?kms_master_key_id:string -> ?sse_type:sse_type -> ?enabled:bool -> unit -> sse_specification

Create an sse_specification type

val make_replica_global_secondary_index : ?on_demand_throughput_override:on_demand_throughput_override -> ?provisioned_throughput_override:provisioned_throughput_override -> index_name:string -> unit -> replica_global_secondary_index
val make_create_replication_group_member_action : ?table_class_override:table_class -> ?global_secondary_indexes:replica_global_secondary_index list -> ?on_demand_throughput_override:on_demand_throughput_override -> ?provisioned_throughput_override:provisioned_throughput_override -> ?kms_master_key_id:string -> region_name:string -> unit -> create_replication_group_member_action
val make_update_replication_group_member_action : ?table_class_override:table_class -> ?global_secondary_indexes:replica_global_secondary_index list -> ?on_demand_throughput_override:on_demand_throughput_override -> ?provisioned_throughput_override:provisioned_throughput_override -> ?kms_master_key_id:string -> region_name:string -> unit -> update_replication_group_member_action
val make_delete_replication_group_member_action : region_name:string -> unit -> delete_replication_group_member_action
val make_update_table_input : ?on_demand_throughput:on_demand_throughput -> ?deletion_protection_enabled:bool -> ?table_class:table_class -> ?replica_updates:replication_group_update list -> ?sse_specification:sse_specification -> ?stream_specification:stream_specification -> ?global_secondary_index_updates:global_secondary_index_update list -> ?provisioned_throughput:provisioned_throughput -> ?billing_mode:billing_mode -> ?attribute_definitions:attribute_definition list -> table_name:string -> unit -> update_table_input

Create an update_table_input type
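
For example, raising the provisioned throughput of a hypothetical "Music" table; all other settings are left unchanged:

let bump_throughput =
  make_update_table_input
    ~provisioned_throughput:
      (make_provisioned_throughput
         ~write_capacity_units:50 ~read_capacity_units:100 ())
    ~table_name:"Music" ()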

val make_update_kinesis_streaming_configuration : ?approximate_creation_date_time_precision: approximate_creation_date_time_precision -> unit -> update_kinesis_streaming_configuration
val make_update_kinesis_streaming_destination_output : ?update_kinesis_streaming_configuration: update_kinesis_streaming_configuration -> ?destination_status:destination_status -> ?stream_arn:string -> ?table_name:string -> unit -> update_kinesis_streaming_destination_output
val make_update_kinesis_streaming_destination_input : ?update_kinesis_streaming_configuration: update_kinesis_streaming_configuration -> stream_arn:string -> table_name:string -> unit -> update_kinesis_streaming_destination_input
val make_capacity : ?capacity_units:float -> ?write_capacity_units:float -> ?read_capacity_units:float -> unit -> capacity

Create a capacity type

val make_consumed_capacity : ?global_secondary_indexes:(string * capacity) list -> ?local_secondary_indexes:(string * capacity) list -> ?table:capacity -> ?write_capacity_units:float -> ?read_capacity_units:float -> ?capacity_units:float -> ?table_name:string -> unit -> consumed_capacity

Create a consumed_capacity type

val make_item_collection_metrics : ?size_estimate_range_g_b:float list -> ?item_collection_key:(string * attribute_value) list -> unit -> item_collection_metrics
val make_update_item_output : ?item_collection_metrics:item_collection_metrics -> ?consumed_capacity:consumed_capacity -> ?attributes:(string * attribute_value) list -> unit -> update_item_output

Create an update_item_output type

val make_attribute_value_update : ?action:attribute_action -> ?value:attribute_value -> unit -> attribute_value_update
val make_expected_attribute_value : ?attribute_value_list:attribute_value list -> ?comparison_operator:comparison_operator -> ?exists:bool -> ?value:attribute_value -> unit -> expected_attribute_value
val make_update_item_input : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> ?update_expression:string -> ?return_item_collection_metrics:return_item_collection_metrics -> ?return_consumed_capacity:return_consumed_capacity -> ?return_values:return_value -> ?conditional_operator:conditional_operator -> ?expected:(string * expected_attribute_value) list -> ?attribute_updates:(string * attribute_value_update) list -> key:(string * attribute_value) list -> table_name:string -> unit -> update_item_input

Create an update_item_input type
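
For example, incrementing a counter attribute with an update expression; the table, key, and attribute names are hypothetical:

let update_input =
  make_update_item_input
    ~expression_attribute_values:[ (":inc", N "1") ]
    ~update_expression:"SET PlayCount = PlayCount + :inc"
    ~key:[ ("Artist", S "Acme Band"); ("SongTitle", S "Happy Day") ]
    ~table_name:"Music" ()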

val make_replica_global_secondary_index_settings_description : ?provisioned_write_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?provisioned_write_capacity_units:int -> ?provisioned_read_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?provisioned_read_capacity_units:int -> ?index_status:index_status -> index_name:string -> unit -> replica_global_secondary_index_settings_description
val make_replica_settings_description : ?replica_table_class_summary:table_class_summary -> ?replica_global_secondary_index_settings: replica_global_secondary_index_settings_description list -> ?replica_provisioned_write_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?replica_provisioned_write_capacity_units:int -> ?replica_provisioned_read_capacity_auto_scaling_settings: auto_scaling_settings_description -> ?replica_provisioned_read_capacity_units:int -> ?replica_billing_mode_summary:billing_mode_summary -> ?replica_status:replica_status -> region_name:string -> unit -> replica_settings_description
val make_update_global_table_settings_output : ?replica_settings:replica_settings_description list -> ?global_table_name:string -> unit -> update_global_table_settings_output
val make_global_table_global_secondary_index_settings_update : ?provisioned_write_capacity_auto_scaling_settings_update: auto_scaling_settings_update -> ?provisioned_write_capacity_units:int -> index_name:string -> unit -> global_table_global_secondary_index_settings_update
val make_replica_global_secondary_index_settings_update : ?provisioned_read_capacity_auto_scaling_settings_update: auto_scaling_settings_update -> ?provisioned_read_capacity_units:int -> index_name:string -> unit -> replica_global_secondary_index_settings_update
val make_replica_settings_update : ?replica_table_class:table_class -> ?replica_global_secondary_index_settings_update: replica_global_secondary_index_settings_update list -> ?replica_provisioned_read_capacity_auto_scaling_settings_update: auto_scaling_settings_update -> ?replica_provisioned_read_capacity_units:int -> region_name:string -> unit -> replica_settings_update
val make_update_global_table_settings_input : ?replica_settings_update:replica_settings_update list -> ?global_table_global_secondary_index_settings_update: global_table_global_secondary_index_settings_update list -> ?global_table_provisioned_write_capacity_auto_scaling_settings_update: auto_scaling_settings_update -> ?global_table_provisioned_write_capacity_units:int -> ?global_table_billing_mode:billing_mode -> global_table_name:string -> unit -> update_global_table_settings_input
val make_global_table_description : ?global_table_name:string -> ?global_table_status:global_table_status -> ?creation_date_time:float -> ?global_table_arn:string -> ?replication_group:replica_description list -> unit -> global_table_description
val make_update_global_table_output : ?global_table_description:global_table_description -> unit -> update_global_table_output
val make_create_replica_action : region_name:string -> unit -> create_replica_action

Create a create_replica_action type

val make_delete_replica_action : region_name:string -> unit -> delete_replica_action

Create a delete_replica_action type

val make_replica_update : ?delete:delete_replica_action -> ?create:create_replica_action -> unit -> replica_update

Create a replica_update type

val make_update_global_table_input : replica_updates:replica_update list -> global_table_name:string -> unit -> update_global_table_input
val make_update_contributor_insights_output : ?contributor_insights_status:contributor_insights_status -> ?index_name:string -> ?table_name:string -> unit -> update_contributor_insights_output
val make_update_contributor_insights_input : ?index_name:string -> contributor_insights_action:contributor_insights_action -> table_name:string -> unit -> update_contributor_insights_input
val make_point_in_time_recovery_description : ?latest_restorable_date_time:float -> ?earliest_restorable_date_time:float -> ?point_in_time_recovery_status:point_in_time_recovery_status -> unit -> point_in_time_recovery_description
val make_continuous_backups_description : ?point_in_time_recovery_description:point_in_time_recovery_description -> continuous_backups_status:continuous_backups_status -> unit -> continuous_backups_description
val make_update_continuous_backups_output : ?continuous_backups_description:continuous_backups_description -> unit -> update_continuous_backups_output
val make_point_in_time_recovery_specification : point_in_time_recovery_enabled:bool -> unit -> point_in_time_recovery_specification
val make_update_continuous_backups_input : point_in_time_recovery_specification:point_in_time_recovery_specification -> table_name:string -> unit -> update_continuous_backups_input
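
A sketch of enabling point-in-time recovery on a table (the table name is hypothetical):

  let input =
    make_update_continuous_backups_input
      ~point_in_time_recovery_specification:
        (make_point_in_time_recovery_specification
           ~point_in_time_recovery_enabled:true ())
      ~table_name:"Music" ()
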
val make_update : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> table_name:string -> update_expression:string -> key:(string * attribute_value) list -> unit -> update

Create an update type

val make_untag_resource_input : tag_keys:string list -> resource_arn:string -> unit -> untag_resource_input

Create an untag_resource_input type

val make_cancellation_reason : ?message:string -> ?code:string -> ?item:(string * attribute_value) list -> unit -> cancellation_reason

Create a cancellation_reason type

val make_transact_write_items_output : ?item_collection_metrics:(string * item_collection_metrics list) list -> ?consumed_capacity:consumed_capacity list -> unit -> transact_write_items_output
val make_condition_check : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> condition_expression:string -> table_name:string -> key:(string * attribute_value) list -> unit -> condition_check

Create a condition_check type

val make_put : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> table_name:string -> item:(string * attribute_value) list -> unit -> put

Create a put type

val make_delete : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> table_name:string -> key:(string * attribute_value) list -> unit -> delete

Create a delete type

val make_transact_write_item : ?update:update -> ?delete:delete -> ?put:put -> ?condition_check:condition_check -> unit -> transact_write_item

Create a transact_write_item type

val make_transact_write_items_input : ?client_request_token:string -> ?return_item_collection_metrics:return_item_collection_metrics -> ?return_consumed_capacity:return_consumed_capacity -> transact_items:transact_write_item list -> unit -> transact_write_items_input
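
The transactional write constructors compose per-table operations into one input; a sketch that inserts an order only if a customer record exists (tables, keys, and attributes are hypothetical):

  let check =
    make_condition_check
      ~condition_expression:"attribute_exists(pk)"
      ~table_name:"Customers"
      ~key:[ ("pk", S "customer#42") ] ()

  let put =
    make_put ~table_name:"Orders"
      ~item:[ ("pk", S "order#1001"); ("amount", N "99.50") ] ()

  let input =
    make_transact_write_items_input
      ~transact_items:
        [ make_transact_write_item ~condition_check:check ();
          make_transact_write_item ~put () ]
      ()
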
val make_item_response : ?item:(string * attribute_value) list -> unit -> item_response

Create an item_response type

val make_transact_get_items_output : ?responses:item_response list -> ?consumed_capacity:consumed_capacity list -> unit -> transact_get_items_output
val make_get : ?expression_attribute_names:(string * string) list -> ?projection_expression:string -> table_name:string -> key:(string * attribute_value) list -> unit -> get

Create a get type

val make_transact_get_item : get:get -> unit -> transact_get_item

Create a transact_get_item type

val make_transact_get_items_input : ?return_consumed_capacity:return_consumed_capacity -> transact_items:transact_get_item list -> unit -> transact_get_items_input
val make_time_to_live_description : ?attribute_name:string -> ?time_to_live_status:time_to_live_status -> unit -> time_to_live_description
val make_tag : value:string -> key:string -> unit -> tag

Create a tag type

val make_tag_resource_input : tags:tag list -> resource_arn:string -> unit -> tag_resource_input

Create a tag_resource_input type
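
A sketch of building a tagging request; the tag key and value are hypothetical, and the ARN is taken as a parameter:

  let tag_input ~resource_arn =
    make_tag_resource_input
      ~tags:[ make_tag ~value:"prod" ~key:"env" () ]
      ~resource_arn ()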

val make_global_secondary_index : ?on_demand_throughput:on_demand_throughput -> ?provisioned_throughput:provisioned_throughput -> projection:projection -> key_schema:key_schema_element list -> index_name:string -> unit -> global_secondary_index
val make_table_creation_parameters : ?global_secondary_indexes:global_secondary_index list -> ?sse_specification:sse_specification -> ?on_demand_throughput:on_demand_throughput -> ?provisioned_throughput:provisioned_throughput -> ?billing_mode:billing_mode -> key_schema:key_schema_element list -> attribute_definitions:attribute_definition list -> table_name:string -> unit -> table_creation_parameters
val make_local_secondary_index_info : ?projection:projection -> ?key_schema:key_schema_element list -> ?index_name:string -> unit -> local_secondary_index_info
val make_global_secondary_index_info : ?on_demand_throughput:on_demand_throughput -> ?provisioned_throughput:provisioned_throughput -> ?projection:projection -> ?key_schema:key_schema_element list -> ?index_name:string -> unit -> global_secondary_index_info
val make_source_table_feature_details : ?sse_description:sse_description -> ?time_to_live_description:time_to_live_description -> ?stream_description:stream_specification -> ?global_secondary_indexes:global_secondary_index_info list -> ?local_secondary_indexes:local_secondary_index_info list -> unit -> source_table_feature_details
val make_source_table_details : ?billing_mode:billing_mode -> ?item_count:int -> ?on_demand_throughput:on_demand_throughput -> ?table_size_bytes:int -> ?table_arn:string -> provisioned_throughput:provisioned_throughput -> table_creation_date_time:float -> key_schema:key_schema_element list -> table_id:string -> table_name:string -> unit -> source_table_details

Create a source_table_details type

val make_scan_output : ?consumed_capacity:consumed_capacity -> ?last_evaluated_key:(string * attribute_value) list -> ?scanned_count:int -> ?count:int -> ?items:(string * attribute_value) list list -> unit -> scan_output

Create a scan_output type

val make_condition : ?attribute_value_list:attribute_value list -> comparison_operator:comparison_operator -> unit -> condition

Create a condition type

val make_scan_input : ?consistent_read:bool -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?filter_expression:string -> ?projection_expression:string -> ?segment:int -> ?total_segments:int -> ?return_consumed_capacity:return_consumed_capacity -> ?exclusive_start_key:(string * attribute_value) list -> ?conditional_operator:conditional_operator -> ?scan_filter:(string * condition) list -> ?select:select -> ?limit:int -> ?attributes_to_get:string list -> ?index_name:string -> table_name:string -> unit -> scan_input

Create a scan_input type
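
A sketch of one worker's slice of a four-segment parallel scan with a filter expression (table and attribute names are hypothetical):

  let input =
    make_scan_input
      ~filter_expression:"#s = :active"
      ~expression_attribute_names:[ ("#s", "status") ]
      ~expression_attribute_values:[ (":active", S "active") ]
      ~segment:0 ~total_segments:4
      ~table_name:"Sessions" ()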

val make_s3_bucket_source : ?s3_key_prefix:string -> ?s3_bucket_owner:string -> s3_bucket:string -> unit -> s3_bucket_source

Create an s3_bucket_source type

val make_restore_table_to_point_in_time_output : ?table_description:table_description -> unit -> restore_table_to_point_in_time_output
val make_local_secondary_index : projection:projection -> key_schema:key_schema_element list -> index_name:string -> unit -> local_secondary_index

Create a local_secondary_index type

val make_restore_table_to_point_in_time_input : ?sse_specification_override:sse_specification -> ?on_demand_throughput_override:on_demand_throughput -> ?provisioned_throughput_override:provisioned_throughput -> ?local_secondary_index_override:local_secondary_index list -> ?global_secondary_index_override:global_secondary_index list -> ?billing_mode_override:billing_mode -> ?restore_date_time:float -> ?use_latest_restorable_time:bool -> ?source_table_name:string -> ?source_table_arn:string -> target_table_name:string -> unit -> restore_table_to_point_in_time_input
val make_restore_table_from_backup_output : ?table_description:table_description -> unit -> restore_table_from_backup_output
val make_restore_table_from_backup_input : ?sse_specification_override:sse_specification -> ?on_demand_throughput_override:on_demand_throughput -> ?provisioned_throughput_override:provisioned_throughput -> ?local_secondary_index_override:local_secondary_index list -> ?global_secondary_index_override:global_secondary_index list -> ?billing_mode_override:billing_mode -> backup_arn:string -> target_table_name:string -> unit -> restore_table_from_backup_input
val make_replica : ?region_name:string -> unit -> replica

Create a replica type

val make_query_output : ?consumed_capacity:consumed_capacity -> ?last_evaluated_key:(string * attribute_value) list -> ?scanned_count:int -> ?count:int -> ?items:(string * attribute_value) list list -> unit -> query_output

Create a query_output type

val make_query_input : ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?key_condition_expression:string -> ?filter_expression:string -> ?projection_expression:string -> ?return_consumed_capacity:return_consumed_capacity -> ?exclusive_start_key:(string * attribute_value) list -> ?scan_index_forward:bool -> ?conditional_operator:conditional_operator -> ?query_filter:(string * condition) list -> ?key_conditions:(string * condition) list -> ?consistent_read:bool -> ?limit:int -> ?attributes_to_get:string list -> ?select:select -> ?index_name:string -> table_name:string -> unit -> query_input

Create a query_input type
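
A sketch of querying the ten most recent items in one partition, newest first (names are hypothetical):

  let input =
    make_query_input
      ~key_condition_expression:"artist = :a"
      ~expression_attribute_values:[ (":a", S "No One You Know") ]
      ~scan_index_forward:false
      ~limit:10
      ~table_name:"Music" ()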

val make_put_resource_policy_output : ?revision_id:string -> unit -> put_resource_policy_output
val make_put_resource_policy_input : ?confirm_remove_self_resource_access:bool -> ?expected_revision_id:string -> policy:string -> resource_arn:string -> unit -> put_resource_policy_input
val make_put_item_output : ?item_collection_metrics:item_collection_metrics -> ?consumed_capacity:consumed_capacity -> ?attributes:(string * attribute_value) list -> unit -> put_item_output

Create a put_item_output type

val make_put_item_input : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> ?conditional_operator:conditional_operator -> ?return_item_collection_metrics:return_item_collection_metrics -> ?return_consumed_capacity:return_consumed_capacity -> ?return_values:return_value -> ?expected:(string * expected_attribute_value) list -> item:(string * attribute_value) list -> table_name:string -> unit -> put_item_input

Create a put_item_input type
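
A sketch of a conditional put that succeeds only when no item with the same key already exists (names are hypothetical):

  let input =
    make_put_item_input
      ~condition_expression:"attribute_not_exists(pk)"
      ~item:[ ("pk", S "user#7"); ("name", S "Ada") ]
      ~table_name:"Users" ()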

val make_batch_statement_error : ?item:(string * attribute_value) list -> ?message:string -> ?code:batch_statement_error_code_enum -> unit -> batch_statement_error

Create a batch_statement_error type

val make_batch_statement_response : ?item:(string * attribute_value) list -> ?table_name:string -> ?error:batch_statement_error -> unit -> batch_statement_response
val make_batch_statement_request : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?consistent_read:bool -> ?parameters:attribute_value list -> statement:string -> unit -> batch_statement_request
val make_parameterized_statement : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?parameters:attribute_value list -> statement:string -> unit -> parameterized_statement
val make_list_tags_of_resource_output : ?next_token:string -> ?tags:tag list -> unit -> list_tags_of_resource_output
val make_list_tags_of_resource_input : ?next_token:string -> resource_arn:string -> unit -> list_tags_of_resource_input
val make_list_tables_output : ?last_evaluated_table_name:string -> ?table_names:string list -> unit -> list_tables_output

Create a list_tables_output type

val make_list_tables_input : ?limit:int -> ?exclusive_start_table_name:string -> unit -> list_tables_input

Create a list_tables_input type

val make_import_summary : ?end_time:float -> ?start_time:float -> ?input_format:input_format -> ?cloud_watch_log_group_arn:string -> ?s3_bucket_source:s3_bucket_source -> ?table_arn:string -> ?import_status:import_status -> ?import_arn:string -> unit -> import_summary

Create an import_summary type

val make_list_imports_output : ?next_token:string -> ?import_summary_list:import_summary list -> unit -> list_imports_output

Create a list_imports_output type

val make_list_imports_input : ?next_token:string -> ?page_size:int -> ?table_arn:string -> unit -> list_imports_input

Create a list_imports_input type

val make_global_table : ?replication_group:replica list -> ?global_table_name:string -> unit -> global_table

Create a global_table type

val make_list_global_tables_output : ?last_evaluated_global_table_name:string -> ?global_tables:global_table list -> unit -> list_global_tables_output
val make_list_global_tables_input : ?region_name:string -> ?limit:int -> ?exclusive_start_global_table_name:string -> unit -> list_global_tables_input
val make_export_summary : ?export_type:export_type -> ?export_status:export_status -> ?export_arn:string -> unit -> export_summary

Create an export_summary type

val make_list_exports_output : ?next_token:string -> ?export_summaries:export_summary list -> unit -> list_exports_output

Create a list_exports_output type

val make_list_exports_input : ?next_token:string -> ?max_results:int -> ?table_arn:string -> unit -> list_exports_input

Create a list_exports_input type

val make_contributor_insights_summary : ?contributor_insights_status:contributor_insights_status -> ?index_name:string -> ?table_name:string -> unit -> contributor_insights_summary
val make_list_contributor_insights_output : ?next_token:string -> ?contributor_insights_summaries:contributor_insights_summary list -> unit -> list_contributor_insights_output
val make_list_contributor_insights_input : ?max_results:int -> ?next_token:string -> ?table_name:string -> unit -> list_contributor_insights_input
val make_backup_summary : ?backup_size_bytes:int -> ?backup_type:backup_type -> ?backup_status:backup_status -> ?backup_expiry_date_time:float -> ?backup_creation_date_time:float -> ?backup_name:string -> ?backup_arn:string -> ?table_arn:string -> ?table_id:string -> ?table_name:string -> unit -> backup_summary

Create a backup_summary type

val make_list_backups_output : ?last_evaluated_backup_arn:string -> ?backup_summaries:backup_summary list -> unit -> list_backups_output

Create a list_backups_output type

val make_list_backups_input : ?backup_type:backup_type_filter -> ?exclusive_start_backup_arn:string -> ?time_range_upper_bound:float -> ?time_range_lower_bound:float -> ?limit:int -> ?table_name:string -> unit -> list_backups_input

Create a list_backups_input type

val make_enable_kinesis_streaming_configuration : ?approximate_creation_date_time_precision: approximate_creation_date_time_precision -> unit -> enable_kinesis_streaming_configuration
val make_kinesis_streaming_destination_output : ?enable_kinesis_streaming_configuration: enable_kinesis_streaming_configuration -> ?destination_status:destination_status -> ?stream_arn:string -> ?table_name:string -> unit -> kinesis_streaming_destination_output
val make_kinesis_streaming_destination_input : ?enable_kinesis_streaming_configuration: enable_kinesis_streaming_configuration -> stream_arn:string -> table_name:string -> unit -> kinesis_streaming_destination_input
val make_kinesis_data_stream_destination : ?approximate_creation_date_time_precision: approximate_creation_date_time_precision -> ?destination_status_description:string -> ?destination_status:destination_status -> ?stream_arn:string -> unit -> kinesis_data_stream_destination
val make_keys_and_attributes : ?expression_attribute_names:(string * string) list -> ?projection_expression:string -> ?consistent_read:bool -> ?attributes_to_get:string list -> keys:(string * attribute_value) list list -> unit -> keys_and_attributes

Create a keys_and_attributes type

val make_csv_options : ?header_list:string list -> ?delimiter:string -> unit -> csv_options

Create a csv_options type

val make_input_format_options : ?csv:csv_options -> unit -> input_format_options

Create an input_format_options type

val make_incremental_export_specification : ?export_view_type:export_view_type -> ?export_to_time:float -> ?export_from_time:float -> unit -> incremental_export_specification
val make_import_table_description : ?failure_message:string -> ?failure_code:string -> ?imported_item_count:int -> ?processed_item_count:int -> ?processed_size_bytes:int -> ?end_time:float -> ?start_time:float -> ?table_creation_parameters:table_creation_parameters -> ?input_compression_type:input_compression_type -> ?input_format_options:input_format_options -> ?input_format:input_format -> ?cloud_watch_log_group_arn:string -> ?error_count:int -> ?s3_bucket_source:s3_bucket_source -> ?client_token:string -> ?table_id:string -> ?table_arn:string -> ?import_status:import_status -> ?import_arn:string -> unit -> import_table_description
val make_import_table_output : import_table_description:import_table_description -> unit -> import_table_output

Create an import_table_output type

val make_import_table_input : ?input_compression_type:input_compression_type -> ?input_format_options:input_format_options -> ?client_token:string -> table_creation_parameters:table_creation_parameters -> input_format:input_format -> s3_bucket_source:s3_bucket_source -> unit -> import_table_input

Create an import_table_input type

val make_get_resource_policy_output : ?revision_id:string -> ?policy:string -> unit -> get_resource_policy_output
val make_get_resource_policy_input : resource_arn:string -> unit -> get_resource_policy_input
val make_get_item_output : ?consumed_capacity:consumed_capacity -> ?item:(string * attribute_value) list -> unit -> get_item_output

Create a get_item_output type

val make_get_item_input : ?expression_attribute_names:(string * string) list -> ?projection_expression:string -> ?return_consumed_capacity:return_consumed_capacity -> ?consistent_read:bool -> ?attributes_to_get:string list -> key:(string * attribute_value) list -> table_name:string -> unit -> get_item_input

Create a get_item_input type
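
A sketch of a strongly consistent read that projects two attributes, using an expression-attribute name for the reserved word "name" (all names are hypothetical):

  let input =
    make_get_item_input
      ~projection_expression:"#n, email"
      ~expression_attribute_names:[ ("#n", "name") ]
      ~consistent_read:true
      ~key:[ ("pk", S "user#7") ]
      ~table_name:"Users" ()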

val make_failure_exception : ?exception_description:string -> ?exception_name:string -> unit -> failure_exception

Create a failure_exception type

val make_export_description : ?incremental_export_specification:incremental_export_specification -> ?export_type:export_type -> ?item_count:int -> ?billed_size_bytes:int -> ?export_format:export_format -> ?failure_message:string -> ?failure_code:string -> ?s3_sse_kms_key_id:string -> ?s3_sse_algorithm:s3_sse_algorithm -> ?s3_prefix:string -> ?s3_bucket_owner:string -> ?s3_bucket:string -> ?client_token:string -> ?export_time:float -> ?table_id:string -> ?table_arn:string -> ?export_manifest:string -> ?end_time:float -> ?start_time:float -> ?export_status:export_status -> ?export_arn:string -> unit -> export_description

Create an export_description type

val make_export_table_to_point_in_time_output : ?export_description:export_description -> unit -> export_table_to_point_in_time_output
val make_export_table_to_point_in_time_input : ?incremental_export_specification:incremental_export_specification -> ?export_type:export_type -> ?export_format:export_format -> ?s3_sse_kms_key_id:string -> ?s3_sse_algorithm:s3_sse_algorithm -> ?s3_prefix:string -> ?s3_bucket_owner:string -> ?client_token:string -> ?export_time:float -> s3_bucket:string -> table_arn:string -> unit -> export_table_to_point_in_time_input
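
A sketch of a point-in-time export to S3; omitting export_time presumably selects the current time (bucket and prefix are hypothetical, and the table ARN is a parameter):

  let export_input ~table_arn =
    make_export_table_to_point_in_time_input
      ~s3_prefix:"exports/music"
      ~s3_bucket:"my-export-bucket"
      ~table_arn ()
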
val make_execute_transaction_output : ?consumed_capacity:consumed_capacity list -> ?responses:item_response list -> unit -> execute_transaction_output
val make_execute_transaction_input : ?return_consumed_capacity:return_consumed_capacity -> ?client_request_token:string -> transact_statements:parameterized_statement list -> unit -> execute_transaction_input
val make_execute_statement_output : ?last_evaluated_key:(string * attribute_value) list -> ?consumed_capacity:consumed_capacity -> ?next_token:string -> ?items:(string * attribute_value) list list -> unit -> execute_statement_output
val make_execute_statement_input : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?limit:int -> ?return_consumed_capacity:return_consumed_capacity -> ?next_token:string -> ?consistent_read:bool -> ?parameters:attribute_value list -> statement:string -> unit -> execute_statement_input
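
A sketch of a parameterized PartiQL statement, with the `?` placeholder bound positionally from the parameters list (the table name is hypothetical):

  let input =
    make_execute_statement_input
      ~parameters:[ S "user#7" ]
      ~statement:"SELECT * FROM \"Users\" WHERE pk = ?" ()
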
val make_endpoint : cache_period_in_minutes:int -> address:string -> unit -> endpoint

Create an endpoint type

val make_describe_time_to_live_output : ?time_to_live_description:time_to_live_description -> unit -> describe_time_to_live_output
val make_describe_time_to_live_input : table_name:string -> unit -> describe_time_to_live_input
val make_describe_table_replica_auto_scaling_output : ?table_auto_scaling_description:table_auto_scaling_description -> unit -> describe_table_replica_auto_scaling_output
val make_describe_table_replica_auto_scaling_input : table_name:string -> unit -> describe_table_replica_auto_scaling_input
val make_describe_table_output : ?table:table_description -> unit -> describe_table_output

Create a describe_table_output type

val make_describe_table_input : table_name:string -> unit -> describe_table_input

Create a describe_table_input type

val make_describe_limits_output : ?table_max_write_capacity_units:int -> ?table_max_read_capacity_units:int -> ?account_max_write_capacity_units:int -> ?account_max_read_capacity_units:int -> unit -> describe_limits_output
val make_describe_limits_input : unit -> describe_limits_input

Create a describe_limits_input type

val make_describe_kinesis_streaming_destination_output : ?kinesis_data_stream_destinations:kinesis_data_stream_destination list -> ?table_name:string -> unit -> describe_kinesis_streaming_destination_output
val make_describe_kinesis_streaming_destination_input : table_name:string -> unit -> describe_kinesis_streaming_destination_input
val make_describe_import_output : import_table_description:import_table_description -> unit -> describe_import_output
val make_describe_import_input : import_arn:string -> unit -> describe_import_input

Create a describe_import_input type

val make_describe_global_table_settings_output : ?replica_settings:replica_settings_description list -> ?global_table_name:string -> unit -> describe_global_table_settings_output
val make_describe_global_table_settings_input : global_table_name:string -> unit -> describe_global_table_settings_input
val make_describe_global_table_output : ?global_table_description:global_table_description -> unit -> describe_global_table_output
val make_describe_global_table_input : global_table_name:string -> unit -> describe_global_table_input
val make_describe_export_output : ?export_description:export_description -> unit -> describe_export_output
val make_describe_export_input : export_arn:string -> unit -> describe_export_input

Create a describe_export_input type

val make_describe_endpoints_response : endpoints:endpoint list -> unit -> describe_endpoints_response
val make_describe_endpoints_request : unit -> describe_endpoints_request
val make_describe_contributor_insights_output : ?failure_exception:failure_exception -> ?last_update_date_time:float -> ?contributor_insights_status:contributor_insights_status -> ?contributor_insights_rule_list:string list -> ?index_name:string -> ?table_name:string -> unit -> describe_contributor_insights_output
val make_describe_contributor_insights_input : ?index_name:string -> table_name:string -> unit -> describe_contributor_insights_input
val make_describe_continuous_backups_output : ?continuous_backups_description:continuous_backups_description -> unit -> describe_continuous_backups_output
val make_describe_continuous_backups_input : table_name:string -> unit -> describe_continuous_backups_input
val make_backup_details : ?backup_expiry_date_time:float -> ?backup_size_bytes:int -> backup_creation_date_time:float -> backup_type:backup_type -> backup_status:backup_status -> backup_name:string -> backup_arn:string -> unit -> backup_details

Create a backup_details type

val make_backup_description : ?source_table_feature_details:source_table_feature_details -> ?source_table_details:source_table_details -> ?backup_details:backup_details -> unit -> backup_description

Create a backup_description type

val make_describe_backup_output : ?backup_description:backup_description -> unit -> describe_backup_output
val make_describe_backup_input : backup_arn:string -> unit -> describe_backup_input

Create a describe_backup_input type

val make_delete_table_output : ?table_description:table_description -> unit -> delete_table_output

Create a delete_table_output type

val make_delete_table_input : table_name:string -> unit -> delete_table_input

Create a delete_table_input type

val make_delete_resource_policy_output : ?revision_id:string -> unit -> delete_resource_policy_output
val make_delete_resource_policy_input : ?expected_revision_id:string -> resource_arn:string -> unit -> delete_resource_policy_input
val make_delete_item_output : ?item_collection_metrics:item_collection_metrics -> ?consumed_capacity:consumed_capacity -> ?attributes:(string * attribute_value) list -> unit -> delete_item_output

Create a delete_item_output type

val make_delete_item_input : ?return_values_on_condition_check_failure: return_values_on_condition_check_failure -> ?expression_attribute_values:(string * attribute_value) list -> ?expression_attribute_names:(string * string) list -> ?condition_expression:string -> ?return_item_collection_metrics:return_item_collection_metrics -> ?return_consumed_capacity:return_consumed_capacity -> ?return_values:return_value -> ?conditional_operator:conditional_operator -> ?expected:(string * expected_attribute_value) list -> key:(string * attribute_value) list -> table_name:string -> unit -> delete_item_input

Create a delete_item_input type
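
A sketch of a conditional delete that only removes the item while it is still marked inactive (names are hypothetical):

  let input =
    make_delete_item_input
      ~condition_expression:"#s = :inactive"
      ~expression_attribute_names:[ ("#s", "status") ]
      ~expression_attribute_values:[ (":inactive", S "inactive") ]
      ~key:[ ("pk", S "user#7") ]
      ~table_name:"Users" ()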

val make_delete_backup_output : ?backup_description:backup_description -> unit -> delete_backup_output

Create a delete_backup_output type

val make_delete_backup_input : backup_arn:string -> unit -> delete_backup_input

Create a delete_backup_input type

val make_create_table_output : ?table_description:table_description -> unit -> create_table_output

Create a create_table_output type

val make_create_table_input : ?on_demand_throughput:on_demand_throughput -> ?resource_policy:string -> ?deletion_protection_enabled:bool -> ?table_class:table_class -> ?tags:tag list -> ?sse_specification:sse_specification -> ?stream_specification:stream_specification -> ?provisioned_throughput:provisioned_throughput -> ?billing_mode:billing_mode -> ?global_secondary_indexes:global_secondary_index list -> ?local_secondary_indexes:local_secondary_index list -> key_schema:key_schema_element list -> table_name:string -> attribute_definitions:attribute_definition list -> unit -> create_table_input

Create a create_table_input type
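
A sketch that fixes the scalar options and takes the key schema and attribute definitions as parameters, since their constructors appear earlier in the module than this excerpt:

  let make_users_table ~attribute_definitions ~key_schema =
    make_create_table_input
      ~deletion_protection_enabled:true
      ~key_schema ~table_name:"Users" ~attribute_definitions ()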

val make_create_global_table_output : ?global_table_description:global_table_description -> unit -> create_global_table_output
val make_create_global_table_input : replication_group:replica list -> global_table_name:string -> unit -> create_global_table_input
val make_create_backup_output : ?backup_details:backup_details -> unit -> create_backup_output

Create a create_backup_output type

val make_create_backup_input : backup_name:string -> table_name:string -> unit -> create_backup_input

Create a create_backup_input type

val make_batch_write_item_output : ?consumed_capacity:consumed_capacity list -> ?item_collection_metrics:(string * item_collection_metrics list) list -> ?unprocessed_items:(string * write_request list) list -> unit -> batch_write_item_output
val make_batch_write_item_input : ?return_item_collection_metrics:return_item_collection_metrics -> ?return_consumed_capacity:return_consumed_capacity -> request_items:(string * write_request list) list -> unit -> batch_write_item_input
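
write_request values are plain records (the type is defined earlier in this module), so a batch write can be assembled directly; the annotation on requests lets the field names disambiguate the inner records, and all names are hypothetical:

  let requests : write_request list =
    [ { put_request =
          Some { item = [ ("pk", S "user#8"); ("name", S "Grace") ] };
        delete_request = None };
      { put_request = None;
        delete_request = Some { key = [ ("pk", S "user#3") ] } } ]

  let input =
    make_batch_write_item_input ~request_items:[ ("Users", requests) ] ()
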
val make_batch_get_item_output : ?consumed_capacity:consumed_capacity list -> ?unprocessed_keys:(string * keys_and_attributes) list -> ?responses:(string * (string * attribute_value) list list) list -> unit -> batch_get_item_output

Create a batch_get_item_output type

val make_batch_get_item_input : ?return_consumed_capacity:return_consumed_capacity -> request_items:(string * keys_and_attributes) list -> unit -> batch_get_item_input

Create a batch_get_item_input type
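
A sketch of a consistent batch read of two keys from one table (names are hypothetical):

  let input =
    make_batch_get_item_input
      ~request_items:
        [ ( "Users",
            make_keys_and_attributes ~consistent_read:true
              ~keys:[ [ ("pk", S "user#7") ]; [ ("pk", S "user#8") ] ] () ) ]
      ()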

val make_batch_execute_statement_output : ?consumed_capacity:consumed_capacity list -> ?responses:batch_statement_response list -> unit -> batch_execute_statement_output
val make_batch_execute_statement_input : ?return_consumed_capacity:return_consumed_capacity -> statements:batch_statement_request list -> unit -> batch_execute_statement_input

Operations

module BatchExecuteStatement : sig ... end
module BatchGetItem : sig ... end
module BatchWriteItem : sig ... end
module CreateBackup : sig ... end
module CreateGlobalTable : sig ... end
module CreateTable : sig ... end
module DeleteBackup : sig ... end
module DeleteItem : sig ... end
module DeleteResourcePolicy : sig ... end
module DeleteTable : sig ... end
module DescribeBackup : sig ... end
module DescribeContinuousBackups : sig ... end
module DescribeContributorInsights : sig ... end
module DescribeEndpoints : sig ... end
module DescribeExport : sig ... end
module DescribeGlobalTable : sig ... end
module DescribeGlobalTableSettings : sig ... end
module DescribeImport : sig ... end
module DescribeLimits : sig ... end
module DescribeTable : sig ... end
module DescribeTableReplicaAutoScaling : sig ... end
module DescribeTimeToLive : sig ... end
module ExecuteStatement : sig ... end
module ExecuteTransaction : sig ... end
module ExportTableToPointInTime : sig ... end
module GetItem : sig ... end
module GetResourcePolicy : sig ... end
module ImportTable : sig ... end
module ListBackups : sig ... end
module ListContributorInsights : sig ... end
module ListExports : sig ... end
module ListGlobalTables : sig ... end
module ListImports : sig ... end
module ListTables : sig ... end
module ListTagsOfResource : sig ... end
module PutItem : sig ... end
module PutResourcePolicy : sig ... end
module Query : sig ... end
module RestoreTableFromBackup : sig ... end
module RestoreTableToPointInTime : sig ... end
module Scan : sig ... end
module TagResource : sig ... end
module TransactGetItems : sig ... end
module TransactWriteItems : sig ... end
module UntagResource : sig ... end
module UpdateContinuousBackups : sig ... end
module UpdateContributorInsights : sig ... end
module UpdateGlobalTable : sig ... end
module UpdateGlobalTableSettings : sig ... end
module UpdateItem : sig ... end
module UpdateTable : sig ... end
module UpdateTableReplicaAutoScaling : sig ... end
module UpdateTimeToLive : sig ... end