Reference
Edit Schedule In Datasource - This documentation is marked as incomplete.
Bearer authentication header of the form 'Bearer <token>', where <token> is your auth token.
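A minimal sketch of building the authentication header described above. The token value is a placeholder, not a real credential:

```python
# Hypothetical sketch: assembles the Bearer authentication header.
# "YOUR_AUTH_TOKEN" is a placeholder for your actual auth token.
token = "YOUR_AUTH_TOKEN"
headers = {"Authorization": f"Bearer {token}"}
```

This header must be sent with every request to the API.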
The resourceSlug is a URL path parameter containing the teamId associated with the user; anywhere resourceSlug appears, it can be read as a teamId.
The datasource object that was created when testDatasourceApi was run; it must include a datasourceId.
Unique identifier for the datasource.
Identifier of the organization to which the datasource belongs.
Identifier of the team to which the datasource belongs.
The name of the datasource.
Optional description of the datasource.
The original name of the datasource.
The name of the file associated with the datasource, if applicable.
The type of source for the datasource.
The identifier of the data source.
The identifier of the data destination.
The identifier of the workspace associated with the datasource.
The identifier of the connection associated with the datasource.
The record count details for the datasource, including total, successful, and failed records.
The total number of records processed.
The number of successfully processed records.
The number of records that failed to process.
Configuration settings for the datasource connection.
Optional prefix to be added to the destination's namespace. Can be null.
The name of the datasource connection.
The identifier of the data source.
The identifier of the data destination.
The status of the datasource connection. Values match the connection status enum defined by the Airbyte API; connections may be created in a paused state.
Configuration settings for the datasource connection. Structure is dependent on the datasource type.
Scheduling information for the datasource connection.
The type of schedule for the datasource connection.
The cron expression for scheduling; required if the schedule type is 'cron'.
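A hedged sketch of the scheduling object described above. Field names follow the descriptions; the exact casing in the real API may differ:

```python
# Hypothetical schedule objects; field names are assumptions based on the
# descriptions above, not confirmed API field names.
schedule_cron = {
    "scheduleType": "cron",
    "cronExpression": "0 0 * * *",  # daily at midnight; required only for 'cron'
}

schedule_manual = {
    "scheduleType": "manual",  # no cronExpression needed for non-cron schedules
}
```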
Specifies where the data should be stored geographically.
Defines how the namespace should be determined for the data.
The format of the namespace; can be null if not applicable.
Specifies the behavior for handling non-breaking schema updates.
The date and time when the datasource was created.
The date and time when the datasource was last synced. Null indicates it has never been synced.
The current status of the datasource. One of 'draft', 'processing', 'embedding', or 'ready'.
Schema discovered during the data source connection. The structure depends on the source type.
Configuration settings for chunking unstructured data, including partitioning and chunking strategies, character limits, and similarity thresholds.
The partitioning strategy used for unstructured data.
The chunking strategy used for unstructured data.
The maximum number of characters allowed per chunk.
The number of characters after which a new chunk is created.
The number of characters to overlap between chunks.
Threshold for similarity when chunking by similarity, with a value strictly between 0.0 and 1.0.
Indicates whether to apply overlap to all chunks or only between adjacent chunks.
The field used for embedding within the datasource.
The field used to apply time weighting within the datasource.
Identifier of the embedding model used, if applicable.
Indicates whether the datasource is hidden from standard views.
Configuration settings for processing streams of data, breaking them into smaller chunks for more manageable processing.
A temporary field that limits cron frequency based on the plan. It will be replaced with a more robust solution in the future.
Path Parameters
The resourceSlug is a URL path parameter containing the teamId associated with the user; anywhere resourceSlug appears, it can be read as a teamId.
Query Parameters
The datasource object that was created when testDatasourceApi was run; it must include a datasourceId.
The name of the datasource.
The original name of the datasource.
The type of source for the datasource.
The identifier of the data source.
The identifier of the data destination.
The identifier of the workspace associated with the datasource.
The identifier of the connection associated with the datasource.
The date and time when the datasource was created.
Unique identifier for the datasource.
Identifier of the organization to which the datasource belongs.
Identifier of the team to which the datasource belongs.
Optional description of the datasource.
The name of the file associated with the datasource, if applicable.
The record count details for the datasource, including total, successful, and failed records.
Configuration settings for the datasource connection.
Optional prefix to be added to the destination's namespace. Can be null.
The name of the datasource connection.
The identifier of the data source.
The identifier of the data destination.
The status of the datasource connection. Values match the connection status enum defined by the Airbyte API; connections may be created in a paused state.
Configuration settings for the datasource connection. Structure is dependent on the datasource type.
Specifies the behavior for handling non-breaking schema updates.
Scheduling information for the datasource connection.
Specifies where the data should be stored geographically.
Defines how the namespace should be determined for the data.
The format of the namespace; can be null if not applicable.
The date and time when the datasource was last synced. Null indicates it has never been synced.
The current status of the datasource. One of 'draft', 'processing', 'embedding', or 'ready'.
Schema discovered during the data source connection. The structure depends on the source type.
Configuration settings for chunking unstructured data, including partitioning and chunking strategies, character limits, and similarity thresholds.
The partitioning strategy used for unstructured data. One of 'auto', 'fast', 'hi_res', or 'ocr_only'.
The chunking strategy used for unstructured data. One of 'basic', 'by_title', 'by_page', or 'by_similarity'.
The maximum number of characters allowed per chunk.
The number of characters after which a new chunk is created.
The number of characters to overlap between chunks.
Threshold for similarity when chunking by similarity; the value must satisfy 0 < x < 1.
Indicates whether to apply overlap to all chunks or only between adjacent chunks.
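The chunking fields above can be sketched as a single configuration object. Field names here are assumptions derived from the descriptions, not confirmed API field names:

```python
# Hypothetical chunking configuration for unstructured data.
chunking_config = {
    "partitioning": "auto",          # one of: auto, fast, hi_res, ocr_only
    "chunkingStrategy": "by_title",  # one of: basic, by_title, by_page, by_similarity
    "maxCharacters": 500,            # maximum characters allowed per chunk
    "newAfterNChars": 400,           # start a new chunk after this many characters
    "overlap": 50,                   # characters of overlap between chunks
    "similarityThreshold": 0.5,      # used only for by_similarity; 0 < x < 1
    "overlapAll": False,             # overlap only adjacent chunks, not all
}
```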
The field used for embedding within the datasource.
The field used to apply time weighting within the datasource.
Identifier of the embedding model used, if applicable.
Indicates whether the datasource is hidden from standard views.
Configuration settings for processing streams of data, breaking them into smaller chunks for more manageable processing.
Configuration settings for a specific stream, used to break down large volumes of data into smaller, manageable chunks for processing.
List of child stream identifiers that are checked for inclusion in the sync.
List of fields that make up the primary key for the stream.
The synchronization mode used for the stream.
List of fields that act as the cursor for incremental syncs.
A map of field names to their descriptions.
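The per-stream fields above can be sketched as follows. The field names and values are illustrative assumptions based on the descriptions, not confirmed API identifiers:

```python
# Hypothetical per-stream configuration for breaking data into manageable chunks.
stream_config = {
    "checkedChildren": ["orders", "order_items"],  # child streams included in the sync
    "primaryKey": ["id"],                          # fields forming the primary key
    "syncMode": "incremental",                     # synchronization mode for the stream
    "cursorField": ["updated_at"],                 # cursor fields for incremental syncs
    "descriptionsMap": {"id": "Unique order id"},  # field name -> description
}
```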
A temporary field that limits cron frequency based on the plan. It will be replaced with a more robust solution in the future.
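Putting the parameters together, an edit-schedule request might be assembled as below. The path shape and body field names are assumptions implied by the parameters above, not the documented endpoint:

```python
# Hypothetical sketch: builds the path and body for editing a datasource's
# schedule. resourceSlug, datasource path segment, and field names are assumed.
def build_edit_schedule_request(resource_slug: str, datasource_id: str, cron: str) -> dict:
    """Assemble the URL path and request body for the edit-schedule call."""
    return {
        "path": f"/{resource_slug}/datasource/{datasource_id}/schedule",
        "body": {
            "datasourceId": datasource_id,  # required, per the parameter description
            "scheduleType": "cron",
            "cronExpression": cron,
        },
    }

req = build_edit_schedule_request("team-123", "ds-456", "0 */6 * * *")
```

Note that the allowed cron frequency may be capped by the plan, per the temporary limit field described above.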