Cognite API (v1)


Introduction

This is the reference documentation for the Cognite API with an overview of all the available methods.

Postman

You can download our Postman collection here. Open Postman, click Import -> Import From Link, insert the link, and import.

You can read more about how to use Postman here.

Pagination

Most resource types can be paginated, as indicated by the nextCursor field in the response. By passing the value of nextCursor as the cursor parameter, you get the next page of up to limit results. Note that all parameters except cursor have to stay the same between requests.
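As a sketch, cursor-based pagination can be driven by a small loop. Here, `fetch_page` is a hypothetical stand-in for one request to a list endpoint; it is not part of the Cognite API itself:

```python
def fetch_all(fetch_page, limit=1000):
    """Collect every item from a cursored listing.

    `fetch_page(cursor, limit)` is a hypothetical callable standing in for
    one HTTP request to a list endpoint; it must return a dict with an
    `items` list and, while more pages remain, a `nextCursor` value.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(cursor=cursor, limit=limit)
        items.extend(page["items"])
        cursor = page.get("nextCursor")
        if cursor is None:  # no nextCursor means this was the last page
            return items
```

Any other query parameters passed along with `fetch_page` would have to stay identical between calls, with only `cursor` changing.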

Parallel retrieval

If you want to download a lot of resources (let's say events), paginating through millions of records can be slow. We support parallel retrieval through the partition parameter, which has the format m/n, where n is the number of partitions to split the entire data set into. If you want to download the entire data set by splitting it into 10 partitions, do the following in parallel with m running from 1 to 10:

  • Make a request to /events with partition=m/10.
  • Paginate through the response by following the cursor as explained above. Note that the partition parameter needs to be passed with all subqueries. Processing of parallel retrieval requests is subject to concurrency quota availability; the API returns a 429 response when the concurrency limits are exceeded. See the Request throttling chapter below.

To prevent unexpected problems and maximize read throughput, use at most 10 partitions. Some CDF resources automatically enforce a maximum of 10 partitions. For more specific and detailed information, read the partition attribute documentation for the CDF resource you're using.
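As an illustrative sketch (the helper name is hypothetical), the query parameters for a 10-way partitioned download can be built like this, with each dict then driving one parallel sequence of paginated requests:

```python
def partition_params(n_partitions=10):
    """Build one query-parameter dict per partition, in "m/n" format.

    Each dict would be sent with a separate, parallel request to the list
    endpoint (e.g. /events), and must be kept on every cursor follow-up
    request within that partition.
    """
    return [{"partition": f"{m}/{n_partitions}"} for m in range(1, n_partitions + 1)]
```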

Request throttling

Cognite Data Fusion (CDF) returns the HTTP 429 (Too Many Requests) response status code when a project exceeds its allocated capacity.

The throttling can happen:

  • If a user or a project sends too many concurrent requests (more than allocated).
  • If a user or a project sends requests at a higher rate than allocated in a given amount of time.

Cognite recommends using a retry strategy based on truncated exponential backoff to handle requests that receive an HTTP 429 response.

Cognite recommends using a reasonable number (up to 10) of Parallel retrieval partitions.

Following these strategies slows down the request frequency and maximizes throughput, without you having to re-submit or retry failing requests.
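One common way to implement truncated exponential backoff is with a capped delay plus random jitter. The base and cap values below are illustrative choices, not values mandated by the API:

```python
import random

def backoff_delay(attempt, base=0.5, cap=60.0):
    """Delay (in seconds) to wait before retry number `attempt` (0-based).

    The delay grows exponentially, is truncated at `cap`, and is jittered
    so that parallel clients don't retry in lockstep.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

A client would sleep for `backoff_delay(attempt)` after each 429 response, giving up after some maximum number of attempts.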

See more here.

API versions

Version headers

This API uses calendar versioning, and version names follow the YYYYMMDD format. You can find the versions currently available by using the version selector at the top of this page.

To use a specific API version, you can pass the cdf-version: $version header along with your requests to the API.

Beta versions

The beta versions provide a preview of what the stable version will look like in the future. Beta versions contain functionality that is reasonably mature, and highly likely to become a part of the stable API.

Beta versions are indicated by a -beta suffix after the version name. For example, the beta version header for the 2023-01-01 version is then cdf-version: 20230101-beta.

Alpha versions

Alpha versions contain functionality that is new and experimental, and not guaranteed to ever become a part of the stable API. This functionality comes with no guarantee of service, so use it with caution.

Alpha versions are indicated by an -alpha suffix after the version name. For example, the alpha version header for the 2023-01-01 version is then cdf-version: 20230101-alpha.
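Putting the three variants together, a small hypothetical helper can turn a calendar version name into the corresponding cdf-version header:

```python
def version_header(version, channel=None):
    """Build the cdf-version header for a YYYY-MM-DD version name.

    `channel` may be None (stable), "beta", or "alpha". The helper itself
    is illustrative; only the resulting header value is what the API sees.
    """
    value = version.replace("-", "")  # 2023-01-01 -> 20230101
    if channel:
        value = f"{value}-{channel}"
    return {"cdf-version": value}
```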

Changelog

This article documents all notable changes to the Cognite Data Fusion (CDF) API v1.

2023-08-22

Time series

Added

  • Data point subscriptions (Beta)
    • Use the new Data point subscriptions feature to configure a subscription that listens to changes in one or more time series (in ingestion order). The feature is intended for data point consumers that need to keep up to date with changes to one or more time series without re-reading the entire time series. (Beta)

2023-08-10

Time series

Added

  • Advanced query language support reaches General Availability (GA).

Sequences

Added

  • Advanced query language support reaches General Availability (GA).

2023-08-08

Assets

Added

  • Advanced query language support reaches General Availability (GA).
    • Advanced search, filtering, and sorting capabilities in the Filter assets endpoint.
    • Advanced aggregation capabilities in the Aggregate assets endpoint.

Events

Added

  • Advanced query language support reaches General Availability (GA).
    • Advanced search, filtering, and sorting capabilities in the Filter events endpoint.
    • Advanced aggregation capabilities in the Aggregate events endpoint.

Documents

Added

  • Advanced query language support reaches General Availability (GA).

2023-06-27

IAM (Identity and access management)

Changed

  • Identity providers (IdP) are required to be compatible with the OpenID Connect Discovery 1.0 standard, and compliance will now be enforced by the Projects API.

    • The oidcConfiguration.jwksUrl and oidcConfiguration.tokenUrl can be entirely omitted when updating the OIDC configuration for a project.
    • The oidcConfiguration.jwksUrl and oidcConfiguration.tokenUrl are preserved for backwards compatibility of the API. However, if they are specified as part of the request body, the values must exactly match the values specified in the OpenID provider configuration document for the configured issuer (found at https://{issuer-url}/.well-known/openid-configuration). If the values do not match, the API returns an error message.
  • The oidcConfiguration.skewMs has been deprecated but remains part of the API for backwards compatibility. It can be omitted from the request. If included, it must always be set to 0.

  • The oidcConfiguration.isGroupCallbackEnabled has been deprecated but remains part of the API for backwards compatibility. It can be omitted from the request.

    • For projects configured to use Azure Active Directory as the identity provider, if this value is specified in the request, it must always be set to true.

2023-06-05

Data Modeling

Added

  • Added support for an autoCreateDirectRelations option on the endpoint for ingesting instances. This option lets the user specify whether to create missing target nodes of direct relations.

Removed

  • Removed support for the deprecated per-item sources field on the /instances/byids endpoint.

Time series

Added

  • Added advanced query language support (Beta).

Sequences

Added

  • Added advanced query language support (Beta).

2023-05-19

Transformations

Added

  • Added support for data model-centric and view-centric schemas.

2023-04-24

Transformations

Removed

  • Removed support for authentication via API keys when creating or updating transformations.

2023-05-04

Annotations

Added

  • Added image.InstanceLink and diagrams.InstanceLink annotation types to allow you to link from objects discovered in images and engineering diagrams to data model instances.

2023-04-18

All resources

Changed

  • Updated the Parallel retrieval documentation.
  • Aligned endpoint naming within Assets, Data sets, Events, and Files.

Assets

Added

  • Added advanced query language support (Beta).
    • Advanced search, filtering, and sorting capabilities in the Filter assets endpoint.
    • Advanced aggregation capabilities in the Aggregate assets endpoint.

Events

Added

  • Added advanced query language support (Beta).
    • Advanced search, filtering, and sorting capabilities in the Filter events endpoint.
    • Advanced aggregation capabilities in the Aggregate events endpoint.

Documents

Added

  • Added advanced query language support (Beta).

2023-04-12

Sessions

Fixed

  • Fixed the API documentation for the request body of the POST /projects/{project}/sessions/byids endpoint. The documentation incorrectly stated the request body schema as specifying the list of session IDs to retrieve, in the form {"items": [42]} - it should in fact be {"items": [{"id": 42}]}. The documentation has been updated to reflect this.

  • Fixed the API documentation for the response body of the POST /projects/{project}/sessions/byids endpoint. The documentation incorrectly stated nextCursor and previousCursor fields as being returned from the response, which was not the case, and these fields have now been removed from the API documentation.
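To make the corrected request shape concrete, a body for the byids endpoint wraps each session ID in an object (the helper function is hypothetical):

```python
def sessions_byids_body(session_ids):
    # Each ID is wrapped in an object, i.e. {"items": [{"id": 42}]},
    # not passed as a bare number ({"items": [42]}).
    return {"items": [{"id": session_id} for session_id in session_ids]}
```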

2023-04-04

Transformations

Changed

  • Transformations support new target types for view-centric data model instances.

Added

  • Added target types nodes and edges.

2023-03-06

Documents

Changed

  • Renamed "approximateCardinality" aggregate to "cardinalityValues" to unify the search spec in Cognite.
  • "uniqueProperties" aggregate no longer supports pagination. It returns unique properties (up to 10000) in the specified path. The results are sorted by frequency.

Added

  • Added "allUniqueProperties" aggregate that returns all unique properties. The response contains a cursor that can be used to fetch all pages of data.

2023-02-03

Seismic

Added

  • Batch downloading of seismics as a ZIP archive is now an experimental v1 endpoint. Using this endpoint requires the experimental ACL, as well as any other ACLs and scopes needed to read the seismics being downloaded.

Fixed

  • The documentation for downloading seismics as SEG-Y files is now part of v1. The API documentation previously didn't reflect that the endpoint had been promoted to version 1.

2023-02-07

Documents

Added

  • Added highlight field in the search endpoint to indicate whether matches in search results should be highlighted.

2023-01-18

3D

Added

  • Added support for using a names filter in the list nodes endpoint.

2023-01-17

Authentication

Removed

We've removed authentication via CDF service accounts and API keys, and user sign-in via /login.

3D

Added

  • Added support for storing translation and scale for model revision.

2023-01-12

Documents

Added

  • Added support for approximateCardinality aggregate.

2023-01-10

Documents

Added

  • Added the search leaf filter, to allow filtering by searching through specified properties.

2023-01-09

Documents

Added

  • Added the uniqueProperties aggregation, which can be used to find all the metadata keys in use.

2023-01-06

Documents

Added

  • Added inAssetSubtree filter to filter documents that have a related asset in a subtree rooted at any of the specified IDs.

2023-01-02

Documents

Added

  • Added advanced filters for metadata (prefix, in, equals)

2022-12-06

3D

Added

  • Added the get3DNodesById endpoint to fetch 3D nodes mapped to an asset.

2022-12-16

Time series

Changed

  • Timestamps of data points may now be as large as 4102444799999 (23:59:59.999, December 31, 2099). The previous limit was the year 2050.
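The new limit can be checked by converting the millisecond timestamp to a UTC datetime:

```python
from datetime import datetime, timezone

# Maximum allowed data point timestamp, in milliseconds since the Unix epoch.
MAX_TS_MS = 4102444799999

# Converting to UTC shows this is the last millisecond of 2099.
last_moment = datetime.fromtimestamp(MAX_TS_MS / 1000, tz=timezone.utc)
```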

2022-11-29

Events

Added

  • Added the nulls field to the sort property specification.

2022-11-17

Time series

Added

2022-10-14

Geospatial

Added

2022-10-11

Transformations

Added

  • Added capability to run a transformation with Nonce credentials provided through the Run endpoint.

2022-10-06

IAM (Identity and access management)

Added

2022-09-09

Vision (Contextualization)

Added

  • Moved the Vision extract service from playground to v1.

2022-08-12

Time series

Changed

2022-07-21

Transformations

Added

  • Added authentication using nonce for the transformation service's existing endpoints.

2022-06-21

Annotations (Data organization)

Added

  • Moved the annotation service from playground to v1.

2022-07-07

Events

Removed

2022-06-13

IAM (Identity and access management)

Added

2022-05-20

Documents

Added

  • Added the POST /documents/aggregate endpoint. The endpoint allows you to count documents optionally grouped by a property and also to retrieve all unique values of a property.

2022-05-12

Documents

Added

  • Added the POST /documents/list endpoint. The endpoint allows you to iterate through all the documents in a project.
  • Added the POST /documents/{documentId}/content endpoint. The endpoint lets you download the entire extracted plain text of a document.

2022-03-15

Sequences

Changed

  • Changed the sequence column limits. The previous limit of 200 total columns has been raised to a maximum of 400 total columns, with at most 400 numeric columns and 200 string columns.

2022-03-02

Sequences

Added

2022-02-08

Time series

Changed

  • Marked isStep parameter to be editable (i.e. removed description stating it is not updatable) in POST /timeseries/create.

Added

2022-02-07

Documents

Added

2022-01-25

Documents

Added

2022-01-24

Time series

Added

  • Added the optional ignoreUnknownIds parameter to POST /sequences/delete. Setting it to true prevents the operation from failing if one or more of the given sequences do not exist; instead, the given sequences that do exist are deleted.
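As a sketch of how the parameter would appear in a request body (the helper is hypothetical, and the item shape follows the usual CDF pattern of externalId references):

```python
def delete_body(external_ids, ignore_unknown=True):
    # With ignoreUnknownIds set to true, items that don't exist are
    # skipped instead of failing the whole request.
    return {
        "items": [{"externalId": ext_id} for ext_id in external_ids],
        "ignoreUnknownIds": ignore_unknown,
    }
```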

2021-12-07

Transformations

Added

2021-11-22

Contextualization

Added

  • Added diagram detect endpoint to v1 to detect annotations in engineering diagrams
  • Added diagram detect results endpoint to v1 to get the results from an engineering diagram detect job
  • Added diagram convert endpoint to v1 to create interactive engineering diagrams in SVG format with highlighted annotations
  • Added diagram convert results endpoint to v1 to get the results for a job converting engineering diagrams to SVGs

2021-11-17

3D

Added

  • Added dataSetId support to 3D models enabling data access scoping of 3D data

2021-10-13

Raw

Changed

  • To align with Microsoft Azure clusters, table and database names are now sensitive to trailing spaces also in Google Cloud Platform clusters.

2021-10-05

Extraction Pipelines

Added

  • New Extraction Pipelines resource to document extractors and monitor the status of data ingestion, making sure reliable and trustworthy data flows into the CDF data sets.
  • API endpoints for creating, managing, and deleting extraction pipelines. Capture common attributes of extractors, such as owners, contacts, schedule, destination RAW databases, and data set. Document structured metadata in the form of key-value attributes, as well as an unstructured documentation attribute that supports Markdown (rendered as Markdown in Fusion).
  • Extraction pipeline runs are CDF objects that store statuses related to an extraction pipeline. The supported statuses are success, failure, and seen. They enable extractor developers to report a status and an error message after ingesting data, and to report a heartbeat through the seen status, making it easy to identify issues such as crashed applications and scheduling problems.

2021-09-28

Sequences

Added

Time series

Added

2021-08-18

IAM (Identity and access management)

Added

Added sessions to v1. Sessions let you securely delegate access to CDF resources to CDF services (such as Functions) on behalf of an external principal, and for an extended time.

2021-08-12

Relationships

Added

2021-07-01

3D

Added

  • Added filter3dNodes endpoint to allow for more advanced filtering on node metadata

2021-06-29

Labels

Added

2021-06-08

Sequences

Added

2021-06-01

Assets

Added

2021-04-28

Time series

Changed granularity limits on hour aggregates

You can now ask for a granularity of up to 100000 hours (previously 48 hours), both in normal aggregates and in synthetic time series.

2021-04-12

IAM (Identity and access management)

Added

2021-04-06

Authentication

Deprecated

We are deprecating authentication via CDF service accounts and API keys, and user sign-in via /login, in favor of registering applications and services with your IdP (identity provider) and using OpenID Connect and the IdP framework to manage CDF access securely.

The legacy authentication flow is available for customers using Cognite Data Fusion (CDF) on GCP until further notice. We strongly encourage customers to adopt the new authentication flows as soon as possible.

The following API endpoints are deprecated:

  • /api/v1/projects/*/apikeys
  • /api/v1/projects/*/serviceaccounts
  • /login
  • /logout
  • /api/v1/projects/*/groups/serviceaccounts *

*only the sub-resources for listing, adding, and removing members of groups.

2021-03-22

CDF API versions 0.5 and 0.6 reached their end-of-life, after their initial deprecation announcement in the summer of 2019.

2021-03-10

3D

Added

  • Added partition parameter to the List 3D Nodes endpoint for supporting parallel requests.
  • Added sortByNodeId parameter to the List 3D Nodes endpoint, improving request latency in most cases if set to true.

2021-02-26

Entity matching

Fixed

  • Fixed a bug in the documentation for entity matching. The (job) status is a capitalized string.

2020-12-22

Files

Added

  • New field fileType inside derivedFields to refer to a pre-defined subset of MIME types.
  • New filter fileType inside derivedFields to find files with a pre-defined subset of MIME types.

2020-10-20

Files

Added

  • New field geoLocation to refer to the geographic location of the file.
  • New filter geoLocation to find files matching a certain geographic location.

To learn how to leverage new geoLocation features, follow our guide.

2020-08-29

Files

Added

  • New field directory referring to the directory in the source containing the file.
  • New filter directoryPrefix allows you to find Files matching a certain directory prefix.

2020-08-05

Files

Added

  • New field labels allows you to attach labels to Files upon creation or updating.
  • New filter labels allows you to find Files that have been annotated with specific labels.

2020-07-08

IAM (Identity and access management)

Added

  • New project field applicationDomains. If this field is set, users can only sign in to the project through applications hosted on a whitelisted domain. Read more.

2020-07-01

Events

Added

  • The new aggregation uniqueValues allows you to find the different types and subtypes of events in your project.

2020-06-29

Labels

Added

  • New data organization resource: labels. Manage terms that you can use to annotate and group assets.

Assets

Added

  • New filter labels allows you to find resources that have been annotated with specific labels.

Time series

Added

2020-04-28

Events

Added

  • New filtering capabilities to find open events (endTime = null).
  • New filtering capabilities to find all events intersecting a timespan, using activeAtTime.

2020-03-12

General

Added

  • New data organization resource: data sets. Document and track data lineage, ensure data integrity, and allow 3rd parties to write their insights securely back to your Cognite Data Fusion (CDF) project.
  • New attribute datasetId introduced in assets, files, events, time series and sequences.
  • The new filter dataSetIds allows you to narrow down results to resources whose datasetId matches one of a list of ids or externalIds of data sets. Supported by assets, files, events, time series, and sequences.
  • We have added a new aggregation endpoint for time series. With this endpoint, you can find out how many results in a tenant meet the criteria of a filter. We will expand this feature to add more aggregates than count.

Groups

Added

  • Introduced a new capability: datasetsAcl for managing access to data set resources.
  • New scope datasetScope for assets, files, events, time series and sequences ACLs. Allows you to scope down access to resources contained within a specified set of data sets.

2020-03-10

3D

Fixed

  • We fixed a bug in the documentation of 3D model revisions. Applications should anticipate that 3D nodes may not have a bounding box.

2020-02-25

Assets

Added

  • We have added a new aggregation endpoint for assets. With this endpoint, you can find out how many assets in a tenant meet the criteria of a filter. We will expand this feature to add more aggregates than count.

Events

Added

  • We have added a new aggregation endpoint for events. With this endpoint, you can find out how many events in a tenant meet the criteria of a filter. We will expand this feature to add more aggregates than count.

2020-02-12

Assets

Added

  • We have added new aggregation properties: depth and path. You can use the properties in the filter and retrieve endpoints.

2020-02-10

Assets

Added

  • Added the parentExternalId property, which is returned for all assets that have a parent with a defined externalId.

2019-12-09

General

Added

  • Added assetSubtreeIds as a parameter to filter, search, and list endpoints for all core resources. assetSubtreeIds allows you to specify assets that are subtree roots, and then only retrieve resources that are related to assets within those subtrees.

2019-12-04

Assets

Added

  • Added the ability to filter assets by parent external IDs.

2019-11-12

Access control

Removed

  • Groups can no longer be created with a permissions field in v0.5.

2019-10-31

Assets

Added

  • Asset search now has a search.query parameter. This uses an improved search algorithm that tries a wider range of variations of the input terms and gives much better relevancy ranking than the existing search.name and search.description fields.

Time Series

Changed

  • The search.query parameter for time series search now uses an improved search algorithm that tries a wider range of variations of the input terms, and gives much better relevancy ranking.

2019-10-23

Files

Added

  • Added support for updating the mimeType for existing files in files/update requests.

2019-10-18

Time Series

Added

  • Time series filtering capabilities have been expanded with the new Filter time series endpoint, allowing for additional filtering by:

    • Name
    • Unit
    • Type of time series: string or step series
    • Metadata objects
    • ExternalId prefix
    • Created and last-updated time ranges

    The endpoint also supports pagination and partitioning. Check out the detailed API documentation here.

2019-10-02

Sequences

Added

  • Introducing the new sequences core resource type that lets you store numerically indexed multi-column rows of data. Connect your sequences to physical assets and to their source systems through externalId and metadata support. Read more here.

2019-09-30

3D

Added

  • Added endpoint to get multiple nodes for a 3D model by their IDs.
  • Added endpoint to get asset mappings for multiple node IDs or asset IDs.

2019-09-23

Files

Added

  • Added support for filtering on rootAssetIds in GET /files (as a query parameter) and POST /files/list (in the request body).

2019-09-16

Assets and Events

Added

  • Added support for partition in /assets and /events to support parallel retrieval. See the usage guide here.

2019-08-22

3D

Added

  • Added the query parameter intersectsBoundingBox to the list asset mappings endpoint. The parameter filters asset mappings to the assets where the bounding box intersects (or is contained within) the specified bounding box.

2019-08-21

Files

Added

  • Added support for sourceCreatedTime and sourceModifiedTime fields in files v1 endpoints.

Assets

Added

  • Allow the parent asset ID to be updated. The root asset ID must be preserved, and you can not convert a non-root asset to a root asset or vice versa.
  • Support for ignoreUnknownIds when deleting assets.

2019-08-15

3D

Added

  • Properties field for 3D nodes, extracted from uploaded 3D files.
  • Ability to filter nodes with a specific set of properties.

2019-07-24

Files

Changed

  • Allow lookup of names with length up to 256 characters (was 50) for GET /files and POST /files/search operations.
  • Allow creating and retrieving files with mimeType length up to 256 characters (was 64).

2019-07-15

Time series

Added

  • Added query parameter rootAssetIds to list time series endpoint. Returns time series that are linked to an asset that has one of the root assets as an ancestor.

2019-07-11

List of changes in the initial API v1 release, compared to the previous version, API 0.5

General

Added

  • Support for externalId added across resource types. externalId lets you define a unique ID for a data object. Learn more: External IDs
  • externalIdPrefix added as a parameter to the list events, assets and files operations.
  • Richer filtering on the list assets, files and events operations.
  • Search, list and filter operations for assets, events and files now support filtering on source and metadata field values.

Changed

  • Core resources standardize on HTTP methods and URI naming for common operations such as search, partial updates, delete, list and filter
  • API responses are no longer wrapped in a top level data object.
  • Standardized pagination across resources through limit, cursor and nextCursor parameters.
  • The limit parameter no longer implicitly rounds down requested page size to maximum page size.
  • Standardized error responses and codes across all resources. Errors across CDF can be parsed into a single model.
  • Overall improvements to the reference documentation, including documented input constraints, required fields, and individual attribute descriptions.

Removed

  • The sourceId field has been removed from resources. Use externalId instead of sourceId+source to define unique IDs for data objects.
  • Sorting is removed from the search operations for files, assets, events and time series. Results are sorted by relevance.
  • offset and previousCursor parameters are no longer supported for pagination across resources.
  • Fetching an asset subtree is no longer supported by files, assets, events and time series.

Assets

Added

  • Ability to select only root assets through the new root filter.
  • Added the rootId field to specify the top element in an asset hierarchy.
  • Added the ability to filter by the root asset ID. This allows you to scope queries for one or many asset hierarchies.
  • List Assets allows for filtering assets belonging to a set of root assets, specified by a list of asset internal IDs. New query parameter: rootIds.
  • Filter and Search Assets allow for filtering assets belonging to a set of root assets, specified by a combination of internal and external asset identifiers. New body attribute: rootIds.

Changed

  • Updating a single asset is no longer supported through a separate endpoint. Use the update multiple endpoint instead.
  • Deleting assets by default removes only leaf assets (assets without children). The new recursive parameter enables recursive removal of the entire subtree of the asset pointed to by the ID (API 0.5 behaviour).

Removed

  • Overwriting assets is no longer supported.
  • Filtering assets by their complete description is no longer supported.
  • Locating assets fuzzily by name has been removed. Instead, search for assets on the name property.
  • When searching assets, querying over both name and description in the same query is no longer supported.
  • The experimental query parameter boostName has been removed from the search for assets operation.
  • Removed the path and depth fields.

Events

Added

  • Events can now be filtered on asset ID in combination with other filters.
  • The new filter rootAssetIds allows for narrowing down events to those belonging to the specified root assets. Supported by the Filter and Search APIs.

Removed

  • Events can no longer be filtered by empty description.
  • The 'dir' parameter has been removed from the search events operation.

Files

Added

  • Filtering files by assetIds in list files operations now supports multiple assets in the same request.

Changed

  • Download file content has changed from HTTP GET to HTTP POST method.
  • We have renamed the fileType field to mimeType. The field now requires a MIME formatted string (e.g. text/plain).
  • We have renamed the uploadedAt field to uploadedTime.
  • Resumable is now the default behavior for file uploads.
  • Update metadata for single files is no longer supported by a separate operation. Instead, use the update multiple operation.

Removed

  • Replace files metadata endpoint has been removed.
  • Directory has been removed as a property of files.
  • Updating the name or mimeType of a file through the update multiple files operation is no longer supported.
  • Query parameter for specifying the sort direction has been removed from list all files operations.

Raw

Changed

  • Raw has changed structure to become resource-oriented. The URL structure has changed.
  • Recursive deletion of tables and rows when deleting a database is now the default behavior, without a control parameter.

Time series

Added

  • Support for adding datapoints by the id and externalId of the time series. Adding datapoints to time series by name has been removed.
  • Added the ability to update the new externalId attribute for time series.
  • Allow setting externalId during creation of a time series. externalId must be unique across time series.
  • Consolidated the multiple APIs for adding datapoints into a single endpoint, which allows datapoints to be added to multiple time series at the same time.
  • Retrieve datapoints by using the id and externalId of the time series.
  • Time series created through API v1 are not discoverable by API 0.3, 0.4, 0.5, and 0.6 by default. A new attribute, legacyName, can be set on time series creation to enable this compatibility. The value is required to be unique.

Changed

  • Get latest datapoints has been reworked. It introduces support for id and externalId lookup, as well as retrieval from multiple time series within the same request.
  • Time series names are no longer required to be unique. Note that time series (meta objects) created by API v1 will not be discoverable by older API versions.
  • Delete time series endpoint has been redesigned to allow deletion of multiple time series by id and externalId.
  • Delete single and multiple datapoints endpoint has been redesigned and consolidated into a single endpoint. New delete allows selection of multiple time series and time ranges by id and externalId. Selecting by name is no longer available.
  • Update multiple time series restructured to support lookup by externalId.
  • Retrieve time series by ID endpoint restructured adding the ability to get time series by externalId.
  • Set limit for data point value to min -1E100, max 1E100.

Removed

  • The experimental feature for performing calculations across multiple time series (synthetic time series), along with the function and alias attributes, is no longer available.
  • The experimental query parameter boostName has been removed from search operation.
  • Short names for aggregate functions are no longer supported.
  • The ability to remove time series by name has been removed, as names are no longer unique identifiers.
  • Selecting multiple time series and time ranges by name is no longer available.
  • The ability to update isString and isStep attributes is removed. The attributes are not intended to be modified after creation of time series.
  • The endpoint for updating single time series is removed. Use the update multiple time series endpoint instead.
  • Remove ability to overwrite time series object by id. Use the update multiple time series endpoint instead.
  • The ability to retrieve time series matching by name has been removed. Use externalId instead.
  • The ability to retrieve by id from a single time series has been removed. Use retrieve multiple datapoints for multiple time series instead.
  • The ability to retrieve time-aligned datapoints through "dataframe" API has been removed. Similar functionality is available through our supported SDKs.
  • The ability to add datapoints to time series by name has been removed.
  • The ability to look up by time series name has been removed.

IAM (Identity and access management)

Added

  • The login status endpoint includes the ID of the API key making the request (new attribute: apiKeyId), if the request used an API key.

Changed

  • The user resource type has been replaced with service accounts. Users from previous API versions are equivalent to service accounts.
  • Adding, listing and removing users from a group has been replaced by equivalent operations for service accounts.
  • Retrieve project returns a single object instead of a list.
  • The API key list and create endpoints rename the userId attribute to serviceAccountId.

Removed

  • List and create groups no longer use the permissions and source attributes.

3D

Added

  • New 3D API lets you upload and process 3D models. Supported format: FBX.
  • Ability to create and maintain multiple revisions for the 3D models.
  • API for mapping relationships between 3D model nodes and asset hierarchy.

Projects

Projects are used to isolate data in CDF from each other. All objects in CDF belong to a single project, and objects in different projects are generally isolated from each other.

Create a project

Creates new projects given project details. This functionality is currently only available for Cognite and re-sellers of Cognite Data Fusion. Please contact Cognite Support for more information.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of new project specifications

required
Array of objects (NewProjectSpec)

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "name": "Open Industrial Data",
  "urlName": "publicdata"
}

List projects

Lists all projects in which the user has the 'list projects' capability. The user may not have access to any resources in the listed projects, even if they have access to list the project itself.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code

Responses

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Retrieve a project

Retrieves information about a project given the project URL name.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
path Parameters
projectName
required
string
Example: publicdata

The CDF project name, equal to the project variable in the server URL.

Responses

Request samples

const projectInfo = await client.projects.retrieve('publicdata');

Response samples

Content type
application/json
{
  "name": "Open Industrial Data",
  "urlName": "publicdata",
  "defaultGroupId": 123871937,
  "authentication": {
    "validDomains": [],
    "applicationDomains": []
  },
  "oidcConfiguration": {
    "jwksUrl": "string",
    "tokenUrl": "string",
    "issuer": "string",
    "audience": "string",
    "skewMs": 0,
    "accessClaims": [],
    "scopeClaims": [],
    "logClaims": [],
    "isGroupCallbackEnabled": false,
    "identityProviderScope": "string"
  }
}

Update a project

Updates the project configuration.

Warning: Updating a project will invalidate active sessions within that project.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
path Parameters
projectName
required
string
Example: publicdata

The CDF project name, equal to the project variable in the server URL.

Request Body schema: application/json

Object with updated project configuration.

required
object (ProjectUpdateObjectDTO)

Contains the instructions on how to update the project. Note: azureADConfiguration, oidcConfiguration, and oAuth2Configuration are mutually exclusive.

Responses

Request samples

Content type
application/json
{
  "update": {
    "name": {},
    "defaultGroupId": {},
    "validDomains": {},
    "applicationDomains": {},
    "authenticationProtocol": {},
    "azureADConfiguration": {},
    "oAuth2Configuration": {},
    "oidcConfiguration": {}
  }
}

Response samples

Content type
application/json
{
  "name": "Open Industrial Data",
  "urlName": "publicdata",
  "defaultGroupId": 123871937,
  "authentication": {
    "validDomains": [],
    "applicationDomains": []
  },
  "oidcConfiguration": {
    "jwksUrl": "string",
    "tokenUrl": "string",
    "issuer": "string",
    "audience": "string",
    "skewMs": 0,
    "accessClaims": [],
    "scopeClaims": [],
    "logClaims": [],
    "isGroupCallbackEnabled": false,
    "identityProviderScope": "string"
  }
}

Groups

Groups are used to give principals the capabilities to access CDF resources. One principal can be a member in multiple groups and one group can have multiple members. Note that having more than 20 groups per principal is not supported and may result in login issues.

Create groups

Creates one or more named groups, each with a set of capabilities.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of groups to create.

required
Array of objects (GroupSpec)

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}
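The collapsed sample above omits the GroupSpec fields. As a sketch (the `assetsAcl` capability key and its scope shape below are illustrative assumptions; check the GroupSpec schema for the capabilities available in your project), a request body could be assembled like this:

```javascript
// Sketch: assemble a create-groups request body.
// "assetsAcl" and the scope shape are illustrative assumptions;
// consult the GroupSpec schema for the capabilities your project supports.
function buildCreateGroupsBody(name, capabilities) {
  return { items: [{ name, capabilities }] };
}

const groupsBody = buildCreateGroupsBody('asset-readers', [
  { assetsAcl: { actions: ['READ'], scope: { all: {} } } },
]);
```

The resulting object is what you would POST as the request body shown above.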

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Delete groups

Deletes the groups with the given IDs.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of group IDs to delete

items
required
Array of integers <int64> non-empty unique [ items <int64 > ]

Responses

Request samples

Content type
application/json
{
  "items": [
    23872937137,
    1238712837,
    128371973
  ]
}

Response samples

Content type
application/json
{ }

List groups

Retrieves a list of groups the asking principal is a member of. Principals with the groups:list capability can optionally ask for all groups in a project.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
all
boolean
Default: false

Whether to get all groups, only available with the groups:list acl.

Responses

Request samples

const groups = await client.groups.list({ all: true });

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Security categories

Manage security categories for a specific project. Security categories can be used to restrict access to a resource. Applying a security category to a resource means that only principals (users or service accounts) that also have this security category can access the resource. To learn more about security categories please read this page.

Create security categories

Creates security categories with the given names. Duplicate names in the request are ignored. If a security category with one of the provided names exists already, then the request will fail and no security categories are created.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of categories to create

required
Array of objects (SecurityCategorySpecDTO) non-empty

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Delete security categories

Deletes the security categories that match the provided IDs. If any of the provided IDs does not belong to an existing security category, then the request will fail and no security categories are deleted.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of security category IDs to delete.

items
required
Array of integers <int64> non-empty unique [ items <int64 > ]

Responses

Request samples

Content type
application/json
{
  "items": [
    23872937137,
    1238712837,
    128371973
  ]
}

Response samples

Content type
application/json
{ }

List security categories

Retrieves a list of all security categories for a project.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
sort
string
Default: "ASC"
Enum: "ASC" "DESC"

Sort descending or ascending.

cursor
string

Cursor to use for paging through results.

limit
integer <int32> <= 1000
Default: 25

Return up to this many results. Maximum is 1000. Default is 25.

Responses

Request samples

const securityCategories = await client.securityCategories.list({ sort: 'ASC' });

Response samples

Content type
application/json
{
  "items": [
    {}
  ],
  "nextCursor": "string"
}

Sessions

Sessions are used to maintain access to CDF resources for an extended period of time. The methods available to extend a session's lifetime are client credentials and token exchange. Sessions depend on the project OIDC configuration and may become invalid in the following cases:

  • Project OIDC configuration has been updated through the update project endpoint. This action invalidates all of the project's sessions.

  • The session was invalidated through the identity provider.

Create sessions

Creates one or more sessions.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

A request containing the information needed to create a session.

Array of CreateSessionWithClientCredentialsRequest (object) or CreateSessionWithTokenExchangeRequest (object) (CreateSessionRequest) = 1 items

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}
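The request body holds exactly one item: either client credentials or a token-exchange flag. A minimal sketch (the field names follow the CreateSessionWithClientCredentialsRequest and CreateSessionWithTokenExchangeRequest schemas as assumed here; verify against the schema before use):

```javascript
// Sketch: build a create-session body. Field names are assumptions
// based on the two request schemas named in the lead-in.
function clientCredentialsSessionBody(clientId, clientSecret) {
  return { items: [{ clientId, clientSecret }] };
}

function tokenExchangeSessionBody() {
  return { items: [{ tokenExchange: true }] };
}

const ccBody = clientCredentialsSessionBody('my-client-id', 'my-client-secret');
const teBody = tokenExchangeSessionBody();
```

Either object would be POSTed as the request body of this endpoint.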

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

List sessions

List all sessions in the current project.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
status
string
Enum: "ready" "active" "cancelled" "revoked" "access_lost"

If given, only sessions with the given status are returned.

cursor
string

Cursor to use for paging through results.

limit
integer <int32> <= 1000
Default: 25

Return up to this many results. Maximum is 1000. Default is 25.

Responses

Response samples

Content type
application/json
{
  "items": [
    {}
  ],
  "nextCursor": "string",
  "previousCursor": "string"
}

Retrieve sessions with given IDs

Retrieves sessions with given IDs. The request will fail if any of the IDs does not belong to an existing session.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of session IDs to retrieve

required
Array of objects [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Revoke sessions

Revokes access to sessions. Revocation of a session may in some cases take up to one hour to take effect.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

A request containing the information needed to revoke sessions.

Array of objects (RevokeSessionRequest)

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Token

Access tokens issued by an IdP (Azure AD, Google, etc.) are used to access CDF resources.

Inspect

Inspects the CDF access granted to an IdP-issued token.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code

Responses

Response samples

Content type
application/json
{
  "subject": "string",
  "projects": [
    {}
  ],
  "capabilities": [
    {}
  ]
}

User profiles

The user profiles resource is an authoritative source of core user profile information (email, name, job title, etc.) for principals, based on data from the identity provider configured for the CDF project.

User profiles are first created (usually within a few seconds) when a principal issues a request against a CDF API. We currently don't support automatic exchange of user identity information between the identity provider and CDF, but the profile data is updated regularly with the latest data from the identity provider for the principals issuing requests against a CDF API.

Note that the user profile data is mutable, and any updates in the external identity provider may also cause updates in this API. Therefore, you cannot use profile data, for example a user's email, to uniquely identify a principal. The exception is the userIdentifier property which is guaranteed to be immutable.

Get the user profile of the principal issuing the request

Retrieves the user profile of the principal issuing the request. If a principal doesn't have a user profile, you get a not found (404) response code.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code

Responses

Response samples

Content type
application/json
{
  "userIdentifier": "abcd",
  "givenName": "Jane",
  "surname": "Doe",
  "email": "jane.doe@example.com",
  "displayName": "Jane Doe",
  "jobTitle": "Software Engineer",
  "lastUpdatedTime": 0
}

List all user profiles

List all user profiles in the current project. This operation supports pagination by cursor. The results are ordered alphabetically by name.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
limit
integer [ 1 .. 1000 ]
Default: 25

Limits the number of results to be returned. The server returns no more than 1000 results even if the specified limit is larger. The default limit is 25.

cursor
string
Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo

Cursor for paging through results.

Responses

Response samples

Content type
application/json
{
  "items": [
    {}
  ],
  "nextCursor": "string"
}

Retrieve one or more user profiles by ID

Retrieves one or more user profiles, indexed by user identifier, in the same CDF project.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

Specify a maximum of 1000 unique IDs.

Array of objects (UserIdentifier)

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Search user profiles

Search user profiles in the current project. The result set ordering and match criteria threshold may change over time. This operation does not support pagination.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

Query for user profile search.

object
limit
integer <int32> [ 1 .. 1000 ]
Default: 25

Limits the number of results returned by a single request. The default is 25.

Responses

Request samples

Content type
application/json
{
  "search": {
    "name": "string"
  },
  "limit": 25
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Assets

The assets resource type stores digital representations of objects or groups of objects from the physical world. Assets are organized in hierarchies. For example, a water pump asset can be a part of a subsystem asset on an oil platform asset.

Rate and concurrency limits

Rate and concurrency limits apply to some of the endpoints. If a request exceeds one of the limits, it will be throttled with a 429: Too Many Requests response. More on limit types and how to avoid being throttled is described here.

The following limits apply to the List assets, Filter assets, Aggregate assets, and Search assets endpoints. These limits apply to all endpoints simultaneously, i.e., requests made to different endpoints are counted together. Note the additional conditions that apply to the Aggregate assets endpoint, as it provides the most resource-consuming operations.

Limit        Per project                        Per user (identity)
Rate         30 rps total, of which no more     20 rps, of which no more
             than 15 rps to Aggregate           than 10 rps to Aggregate
Concurrency  15 parallel requests, of which     10 parallel requests, of which
             no more than 6 to Aggregate        no more than 4 to Aggregate

Aggregate assets

The aggregation API lets you compute aggregated results on assets, such as getting the count of all assets in a project, checking different names and descriptions of assets in your project, etc.

Aggregate filtering

Filter (filter & advancedFilter) data for aggregates

Filters behave the same way as for the Filter assets endpoint. In text properties, the values are aggregated in a case-insensitive manner.

aggregateFilter to filter aggregate results

aggregateFilter works similarly to advancedFilter but always applies to aggregate properties. For instance, in the case of an aggregation on the source property, only the values (a.k.a. buckets) of the source property can be filtered.
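As an illustration, an aggregation over the source property could keep only buckets starting with a known prefix. Treat this as a sketch: the uniqueValues aggregate and the properties attribute are assumptions (this section documents only the count aggregate), so verify them against the full aggregation API schema.

```javascript
// Sketch: aggregate distinct values of "source" and restrict the
// resulting buckets with aggregateFilter. "uniqueValues" and
// "properties" are assumed names; only "count" is documented here.
const aggregateBody = {
  aggregate: 'uniqueValues',
  properties: [{ property: ['source'] }],
  aggregateFilter: { prefix: { value: 'sap' } },
};
```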

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Assets resource description for more information.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json
One of
aggregate
string
Value: "count"

Type of aggregation to apply. count: Get the approximate number of assets matching the filters.

(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object)))

A filter DSL (Domain Specific Language) to define advanced filter queries.

See more information about filtering DSL here.

Supported properties:

Property Type
["labels"] array of [string]
["createdTime"] number
["dataSetId"] number
["id"] number
["lastUpdatedTime"] number
["parentId"] number
["rootId"] number
["description"] string
["externalId"] string
["metadata"] string
["metadata", "someCustomKey"] string
["name"] string
["source"] string

Note: Filtering on the ["metadata"] property has the following logic: if the value of any metadata key in an asset matches the filter, the asset matches the filter.

object (Filter)

Filter on assets with strict matching.

Responses

Request samples

Content type
application/json
Example
{
  "aggregate": "count",
  "advancedFilter": {
    "or": []
  },
  "filter": {
    "name": "string",
    "parentIds": [],
    "parentExternalIds": [],
    "rootIds": [],
    "assetSubtreeIds": [],
    "dataSetIds": [],
    "metadata": {},
    "source": "string",
    "createdTime": {},
    "lastUpdatedTime": {},
    "root": true,
    "externalIdPrefix": "my.known.prefix",
    "labels": {},
    "geoLocation": {}
  }
}

Response samples

Content type
application/json
Example
{
  "items": [
    {}
  ]
}

Create assets

You can create a maximum of 1000 assets per request.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

List of the assets to create. You can create a maximum of 1000 assets per request.

required
Array of objects (DataExternalAssetItem) [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Delete assets

Deletes assets. By default, recursive=false and the request fails if you attempt to delete assets that other assets reference as their parent. To delete such assets and all their descendants, set recursive to true. The request limit does not include the number of descendants that are deleted.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json
required
Array of AssetInternalId (object) or AssetExternalId (object) (AssetIdEither) [ 1 .. 1000 ] items
recursive
boolean
Default: false

Recursively delete all asset subtrees under the specified IDs.

ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ],
  "recursive": false,
  "ignoreUnknownIds": false
}

Response samples

Content type
application/json
{ }

Filter assets

Retrieve a list of assets in the same project. This operation supports pagination by cursor. Apply Filtering and Advanced filtering criteria to select a subset of assets.

Advanced filtering

Advanced filter lets you create complex filtering expressions that combine simple operations, such as equals, prefix, exists, etc., using boolean operators and, or, and not. It applies to basic fields as well as metadata.

See the advancedFilter attribute in the example.

See more information about filtering DSL here.

Supported leaf filters

Leaf filter Supported fields Description
containsAll Array type fields Only includes results which contain all of the specified values.
{"containsAll": {"property": ["property"], "values": [1, 2, 3]}}
containsAny Array type fields Only includes results which contain at least one of the specified values.
{"containsAny": {"property": ["property"], "values": [1, 2, 3]}}
equals Non-array type fields Only includes results that are equal to the specified value.
{"equals": {"property": ["property"], "value": "example"}}
exists All fields Only includes results where the specified property exists (has value).
{"exists": {"property": ["property"]}}
in Non-array type fields Only includes results that are equal to one of the specified values.
{"in": {"property": ["property"], "values": [1, 2, 3]}}
prefix String type fields Only includes results which start with the specified value.
{"prefix": {"property": ["property"], "value": "example"}}
range Non-array type fields Only includes results that fall within the specified range.
{"range": {"property": ["property"], "gt": 1, "lte": 5}}
Supported operators: gt, lt, gte, lte
search ["name"], ["description"] Introduced to provide functional parity with /assets/search endpoint.
{"search": {"property": ["property"], "value": "example"}}

The search leaf filter provides functional parity with the /assets/search endpoint. It's available only for the ["description"] and ["name"] properties. When you specify only this filter with no explicit ordering, the behavior is the same as that of the /assets/search endpoint without filters. Explicit sorting overrides the default ordering by relevance. The search leaf filter can be combined with any other leaf filter to create complex queries.

See the search filter in the advancedFilter attribute in the example.

advancedFilter attribute limits

  • filter query max depth: 10
  • filter query max number of clauses: 100
  • and and or clauses must have at least one element
  • property array of each leaf filter has the following limitations:
    • number of elements in the array is in the range [1, 2]
    • elements must not be blank
    • each element max length is 128 symbols
    • property array must match one of the existing properties (static or dynamic metadata).
  • containsAll, containsAny, and in filter values array size must be in the range [1, 100]
  • containsAll, containsAny, and in filter values array must contain elements of a primitive type (number, string)
  • range filter must have at least one of the gt, gte, lt, lte attributes; gt is mutually exclusive with gte, and lt is mutually exclusive with lte
  • gt, gte, lt, lte in the range filter must be a primitive value
  • search filter value must not be blank and the length must be in the range [1, 128]
  • filter query may have maximum 2 search leaf filters
  • maximum leaf filter string value length is different depending on the property the filter is using:
    • externalId - 255
    • name - 128 for the search filter and 255 for other filters
    • description - 128 for the search filter and 255 for other filters
    • labels item - 255
    • source - 128
    • any metadata key - 128
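The depth and clause-count limits above can be checked client-side before a request is sent. A minimal sketch (not an official validation routine; it only measures a filter tree against the two documented numeric limits):

```javascript
// Sketch: measure the depth and clause count of an advancedFilter tree,
// mirroring the documented limits (max depth 10, max 100 clauses).
function measureFilter(node, depth = 1) {
  const key = Object.keys(node)[0];
  if (key === 'and' || key === 'or') {
    // A boolean node counts as one clause, plus all nested clauses.
    return node[key].reduce(
      (acc, child) => {
        const m = measureFilter(child, depth + 1);
        return { depth: Math.max(acc.depth, m.depth), clauses: acc.clauses + m.clauses };
      },
      { depth, clauses: 1 }
    );
  }
  if (key === 'not') {
    const m = measureFilter(node[key], depth + 1);
    return { depth: m.depth, clauses: m.clauses + 1 };
  }
  return { depth, clauses: 1 }; // leaf filter
}

const stats = measureFilter({
  or: [
    { equals: { property: ['name'], value: 'pump' } },
    { prefix: { property: ['name'], value: 'pump' } },
  ],
});
// stats.depth === 2, stats.clauses === 3 (the "or" node plus two leaves)
```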

Sorting

By default, assets are sorted by id in ascending order. Use the search leaf filter to sort the results by relevance. Sorting by other fields can be explicitly requested. The order field is optional and defaults to desc for _score_ and asc for all other fields. The nulls field is optional and defaults to auto, which the service translates to last for the asc order and first for the desc order. Partitions are done independently of sorting; there's no guarantee of the sort order between elements from different partitions.

See the sort attribute in the example.

Null values

If the nulls attribute has the auto value or isn't specified, null (missing) values are considered greater than any other values: they are placed last when sorting in asc order and first when sorting in desc order. Otherwise, missing values are placed according to the nulls attribute (last or first), and their placement doesn't depend on the order value. Values such as empty strings aren't considered nulls.
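Putting the sorting rules together, an explicit sort specification could look like this (property names are taken from the supported-properties table above; treat the snippet as a sketch):

```javascript
// Sketch: explicit sort with nulls placement. Fields that are omitted
// fall back to the documented defaults (order "asc", nulls "auto").
const sortSpec = [
  { property: ['createdTime'], order: 'desc', nulls: 'last' },
  { property: ['name'] },
];
```

This array would be sent as the sort attribute of the filter request body.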

Sorting by score

Use the special sort property _score_ when sorting by relevance. The more filters a particular asset matches, the higher its score. This can be useful, for example, when building UIs. Let's assume we want exact matches to be displayed above matches by prefix, as in the request below. An asset named pump will match both the equals and prefix filters and will therefore have a higher score than assets with names like pump valve that match only the prefix filter.

"advancedFilter" : {
  "or" : [
    {
      "equals": {
        "property": ["name"], 
        "value": "pump"
      }
    },
    {
      "prefix": {
        "property": ["name"], 
        "value": "pump"
      }
    }
  ]
},
"sort": [
  {
    "property" : ["_score_"]
  }
]

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Assets resource description for more information.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json
object (Filter)

Filter on assets with strict matching.

(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object)))

A filter DSL (Domain Specific Language) to define advanced filter queries.

See more information about filtering DSL here.

Supported properties:

Property Type
["labels"] array of [string]
["createdTime"] number
["dataSetId"] number
["id"] number
["lastUpdatedTime"] number
["parentId"] number
["rootId"] number
["description"] string
["externalId"] string
["metadata"] string
["metadata", "someCustomKey"] string
["name"] string
["source"] string

Note: Filtering on the ["metadata"] property has the following logic: if the value of any metadata key in an asset matches the filter, the asset matches the filter.

limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Limits the number of results to return.

Array of objects (AssetSortProperty) [ 1 .. 2 ] items

Sort by array of selected properties.

cursor
string
aggregatedProperties
Array of strings (AggregatedProperty)
Items Enum: "childCount" "path" "depth"

Set of aggregated properties to include

partition
string (Partition)

Splits the data set into N partitions. The attribute is specified as a "M/N" string, where M is a natural number in the interval of [1, N]. You need to follow the cursors within each partition in order to receive all the data.

To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions.

When using more than 10 partitions, CDF may silently reduce the number of partitions, for example to K = 10. If you specify "partition": "8/20" (X = 8, N = 20), CDF changes N to K = 10 and processes the request. But if you specify "partition": "11/20" (X = 11 > K), CDF replies with an empty result list and no cursor.

In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status.
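Tying partitions and cursors together, parallel retrieval can be sketched as follows. `fetchPage` is a hypothetical stand-in for a call to the filter/list endpoint and is assumed to resolve to an object of the shape `{ items, nextCursor }`:

```javascript
// Sketch: drain every partition in parallel. `fetchPage` is a
// hypothetical stand-in for a POST to the filter/list endpoint.
async function drainPartition(fetchPage, partition) {
  const items = [];
  let cursor;
  do {
    // The same partition value must be passed to every subquery.
    const page = await fetchPage({ partition, cursor, limit: 1000 });
    items.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor);
  return items;
}

async function drainAllPartitions(fetchPage, n = 10) {
  const partitions = Array.from({ length: n }, (_, i) => `${i + 1}/${n}`);
  const pages = await Promise.all(partitions.map((p) => drainPartition(fetchPage, p)));
  return pages.flat();
}
```

Keeping n at 10 or below matches the guidance above and avoids the silent partition reduction.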

Responses

Request samples

Content type
application/json
{
  "filter": {
    "name": "string",
    "parentIds": [],
    "parentExternalIds": [],
    "rootIds": [],
    "assetSubtreeIds": [],
    "dataSetIds": [],
    "metadata": {},
    "source": "string",
    "createdTime": {},
    "lastUpdatedTime": {},
    "root": true,
    "externalIdPrefix": "my.known.prefix",
    "labels": {},
    "geoLocation": {}
  },
  "advancedFilter": {
    "or": []
  },
  "limit": 100,
  "sort": [
    {},
    {}
  ],
  "cursor": "4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo",
  "aggregatedProperties": [
    "childCount"
  ],
  "partition": "1/10"
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ],
  "nextCursor": "string"
}

List assets

List all assets, or only the assets matching the specified query.

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Assets resource description for more information.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
limit
integer [ 1 .. 1000 ]
Default: 100

Limits the number of results to be returned. The maximum results returned by the server is 1000 even if you specify a higher limit.

cursor
string
Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo

Cursor for paging through results.

includeMetadata
boolean
Default: true

Whether the metadata field should be returned or not.

name
string (AssetName) [ 1 .. 140 ] characters

The name of the asset.

parentIds
string <jsonArray(int64)> (JsonArrayInt64)
Example: parentIds=[363848954441724, 793045462540095, 1261042166839739]

List only assets that have one of the parentIds as a parent. The parentId for root assets is null.

parentExternalIds
string <jsonArray(string)> (JsonArrayString)
Example: parentExternalIds=[externalId_1, externalId_2, externalId_3]

List only assets that have one of the parentExternalIds as a parent. The parentId for root assets is null.

rootIds
string <jsonArray(int64)> (JsonArrayInt64)
Deprecated
Example: rootIds=[363848954441724, 793045462540095, 1261042166839739]

This parameter is deprecated. Use assetSubtreeIds instead. List only assets that have one of the rootIds as a root asset. A root asset is its own root asset.

assetSubtreeIds
string <jsonArray(int64)> (JsonArrayInt64)
Example: assetSubtreeIds=[363848954441724, 793045462540095, 1261042166839739]

List only assets that are in a subtree rooted at any of these assetIds (including the roots given). If the total size of the given subtrees exceeds 100,000 assets, an error will be returned.

assetSubtreeExternalIds
string <jsonArray(string)> (JsonArrayString)
Example: assetSubtreeExternalIds=[externalId_1, externalId_2, externalId_3]

List only assets that are in a subtree rooted at any of these assetExternalIds. If the total size of the given subtrees exceeds 100,000 assets, an error will be returned.

source
string <= 128 characters

The source of the asset, for example which database it's from.

root
boolean
Default: false

Whether the filtered assets are root assets. Set to true to list only root assets.

minCreatedTime
integer <int64> (EpochTimestamp) >= 0

The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.

maxCreatedTime
integer <int64> (EpochTimestamp) >= 0

The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.

minLastUpdatedTime
integer <int64> (EpochTimestamp) >= 0

The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.

maxLastUpdatedTime
integer <int64> (EpochTimestamp) >= 0

The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.

externalIdPrefix
string (CogniteExternalIdPrefix) <= 255 characters
Example: externalIdPrefix=my.known.prefix

Filter by this (case-sensitive) prefix for the external ID.

partition
string
Example: partition=1/10

Splits the data set into N partitions. The attribute is specified as an "M/N" string, where M is a natural number in the interval [1, N]. You need to follow the cursors within each partition in order to receive all the data.

To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions.

When you specify more than 10 partitions, CDF may silently reduce the number of partitions. For example, suppose CDF reduces the number of partitions to K = 10. If you specify an M/N partition value with M = 8 and N = 20 (that is, "partition": "8/20"), CDF changes N to K = 10 and processes the request. But if you specify M = 11 (M > K) and N = 20 (that is, "partition": "11/20"), CDF replies with an empty result list and no cursor in the response.

In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status.
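The pagination rules above (keep every parameter except cursor constant, including partition) can be sketched as a generic cursor-following loop; fetchPage here is a stand-in for your HTTP call and is not part of the API:

```javascript
// Follow nextCursor until exhausted, re-sending all other parameters
// (including partition) unchanged. fetchPage is a hypothetical function
// that performs one GET /assets request and returns the parsed JSON body.
async function listAll(fetchPage, baseParams) {
  const items = [];
  let cursor = undefined;
  do {
    // Only cursor changes between requests; everything else stays the same.
    const page = await fetchPage({ ...baseParams, cursor });
    items.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined && cursor !== null);
  return items;
}

// Parallel retrieval: run listAll once per partition, m = 1..10, with
// baseParams = { partition: `${m}/10`, limit: 1000 }.
```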

Responses

Request samples

const assets = await client.assets.list({ filter: { name: '21PT1019' } });

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ],
  • "nextCursor": "string"
}

Retrieve an asset by its ID

Retrieve an asset by its ID. If you want to retrieve assets by externalIds, use Retrieve assets instead.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
path Parameters
id
required
integer <int64> (CogniteInternalId) [ 1 .. 9007199254740991 ]

A server-generated ID for the object.

Responses

Request samples

const assets = await client.assets.retrieve([{id: 123}, {externalId: 'abc'}]);

Response samples

Content type
application/json
{
  • "createdTime": 0,
  • "lastUpdatedTime": 0,
  • "rootId": 1,
  • "aggregates": {
    • "childCount": 0,
    • "depth": 0,
    • "path": [
      ]
    },
  • "parentId": 1,
  • "parentExternalId": "my.known.id",
  • "externalId": "my.known.id",
  • "name": "string",
  • "description": "string",
  • "dataSetId": 1,
  • "metadata": {
    • "property1": "string",
    • "property2": "string"
    },
  • "source": "string",
  • "labels": [
    • {
      }
    ],
  • "geoLocation": {
    • "type": "Feature",
    • "geometry": {
      },
    • "properties": { }
    },
  • "id": 1
}

Retrieve assets

Retrieve assets by IDs or external IDs. If you request aggregates, be aware that the aggregates are eventually consistent.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

All provided IDs and external IDs must be unique.

required
Array of AssetInternalId (object) or AssetExternalId (object) (AssetIdEither) [ 1 .. 1000 ] items
ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

aggregatedProperties
Array of strings (AggregatedProperty)
Items Enum: "childCount" "path" "depth"

Set of aggregated properties to include

Responses

Request samples

Content type
application/json
{
  • "items": [
    • {
      }
    ],
  • "ignoreUnknownIds": false,
  • "aggregatedProperties": [
    • "childCount"
    ]
}

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Search assets

Fulltext search for assets based on result relevance. Primarily meant for human-centric use-cases, not for programs, since matching and ordering may change over time. Additional filters can also be specified. This operation doesn't support pagination.

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling scheme (limited request rate and concurrency). See the Assets resource description for more information.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

Search query

object (Filter)

Filter on assets with strict matching.

limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Limits the number of results to return.

object (Search)

Fulltext search for assets. Primarily meant for human-centric use-cases, not for programs. The query parameter uses a different search algorithm than the deprecated name and description parameters, and will generally give much better results.

Responses

Request samples

Content type
application/json
{
  • "filter": {
    • "parentIds": [
      ]
    },
  • "search": {
    • "name": "flow",
    • "description": "upstream"
    }
}

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Update assets

Update the attributes of assets.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

All provided IDs and external IDs must be unique. Fields that aren't included in the request aren't changed.

required
Array of AssetChangeById (object) or AssetChangeByExternalId (object) (AssetChange) [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Time series

A time series consists of a sequence of data points connected to a single asset. For example, a water pump asset can have a temperature time series that records a data point in units of °C every second.

A single asset can have several time series. The water pump could have additional time series measuring pressure within the pump, rpm, flow volume, power consumption, and more.

Time series store data points as either numbers or strings. This is controlled by the is_string flag on the time series object. Numerical data points can be aggregated before they are returned from a query (e.g., to find the average temperature for a day). String data points, on the other hand, can't be aggregated by CDF but can store arbitrary information like states (e.g., “open”/”closed”) or more complex information (JSON).

Cognite stores discrete data points, but the underlying process measured by the data points can vary continuously. When interpolating between data points, we can either assume that each value stays the same until the next measurement or linearly changes between the two measurements. The isStep flag controls this on the time series object. For example, if we estimate the average over a time containing two data points, the average will either be close to the first (isStep) or close to the mean of the two (not isStep).

A data point stores a single piece of information, a number or a string, associated with a specific time. Data points are identified by their timestamps, measured in milliseconds since the Unix epoch -- 00:00:00.000, January 1st, 1970. The time series service accepts timestamps in the range from 00:00:00.000, January 1st, 1900 through 23:59:59.999, December 31st, 2099 (in other words, every millisecond in the two centuries from 1900 up to, but not including, 2100). Negative timestamps are used for dates before 1970. Milliseconds is the finest time resolution supported by CDF; fractional milliseconds are not supported. Leap seconds are not counted.
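The accepted timestamp range can be computed directly from the dates above; this sketch validates a timestamp against it (the helper name is illustrative):

```javascript
// The time series service accepts timestamps from 1900-01-01T00:00:00.000Z
// through 2099-12-31T23:59:59.999Z, in milliseconds since the Unix epoch.
const MIN_TIMESTAMP = Date.UTC(1900, 0, 1);     // -2208988800000
const MAX_TIMESTAMP = Date.UTC(2100, 0, 1) - 1; //  4102444799999

function isValidTimestamp(ms) {
  // Milliseconds are the finest resolution: reject fractional values.
  return Number.isInteger(ms) && ms >= MIN_TIMESTAMP && ms <= MAX_TIMESTAMP;
}
```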

Numerical data points can be aggregated before they are retrieved from CDF. This allows for faster queries by reducing the amount of data transferred. You can aggregate data points by specifying one or more aggregates (e.g., average, minimum, maximum) as well as the time granularity over which the aggregates should be applied (e.g., “1h” for one hour).

Aggregates are aligned to the start time modulo the granularity unit. For example, if you ask for daily average temperatures since Monday afternoon last week, the first aggregated data point will contain averages for Monday, the second for Tuesday, and so on. Determining aggregate alignment without considering data point timestamps allows CDF to pre-calculate aggregates (e.g., to quickly return daily average temperatures for a year). Consequently, aggregating over 60 minutes can return a different result than aggregating over 1 hour, because the two queries are aligned differently.

Asset references obtained from a time series - through its asset ID - may be invalid, simply due to the non-transactional nature of HTTP. They are maintained in an eventually consistent manner.
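The 60m-versus-1h alignment difference described above can be reproduced with a simple floor to the granularity unit (a sketch; CDF's internal alignment may differ in details):

```javascript
// Aggregates are aligned to the start time modulo the granularity *unit*,
// not the full granularity. Floor a timestamp to a unit size in ms:
function alignToUnit(ms, unitMs) {
  return Math.floor(ms / unitMs) * unitMs;
}

const MINUTE = 60 * 1000;
const HOUR = 60 * MINUTE;

// A start time of 10:30:45.123 UTC:
const start = Date.UTC(2020, 0, 1, 10, 30, 45, 123);
// granularity "60m" aligns to the minute unit -> 10:30:00.000
// granularity "1h"  aligns to the hour unit   -> 10:00:00.000
```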

Aggregate time series

The aggregation API allows you to compute aggregated results from a set of time series, such as getting the number of time series in a project or checking what assets the different time series in your project are associated with (along with the number of time series for each asset). By specifying filter and/or advancedFilter, the aggregation will take place only over those time series that match the filters. filter and advancedFilter behave the same way as in the list endpoint.

The default behavior, when the aggregate field is not specified in the request body, is to return the number of time series that match the filters (if any). This is the same behavior as when the aggregate field is set to count.

The following requests will both return the total number of time series whose name begins with pump:

{
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

and

{
  "aggregate": "count",
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

The response might be:

{"items": [{"count": 42}]}

Setting aggregate to uniqueValues and specifying a property in properties (this field is an array, but currently only supports one property) will return all unique values (up to a maximum of 1000) that are taken on by that property across all the time series that match the filters, as well as the number of time series that have each of those property values.

This example request finds all the unique asset ids that are referenced by the time series in your project whose name begins with pump:

{
  "aggregate": "uniqueValues",
  "properties": [{"property": ["assetId"]}],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

The response might be the following, saying that 23 time series are associated with asset 18 and 107 time series are associated with asset 76:

{
  "items": [
    {"values": ["18"], "count": 23},
    {"values": ["76"], "count": 107}
  ]
}

Setting aggregate to cardinalityValues will instead return the approximate number of distinct values that are taken on by the given property among the matching time series.

Example request:

{
  "aggregate": "cardinalityValues",
  "properties": [{"property": ["assetId"]}],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

The result is likely exact when the set of unique values is small. In this example, there are likely two distinct asset ids among the matching time series:

{"items": [{"count": 2}]}

Setting aggregate to uniqueProperties will return the set of unique properties whose property path begins with path (which can currently only be ["metadata"]) that are contained in the time series that match the filters.

Example request:

{
  "aggregate": "uniqueProperties",
  "path": ["metadata"],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

The result contains all the unique metadata keys in the time series whose name begins with pump, and the number of time series that contains each metadata key:

{
  "items": [
    {"values": [{"property": ["metadata", "tag"]}], "count": 43},
    {"values": [{"property": ["metadata", "installationDate"]}], "count": 97}
  ]
}

Setting aggregate to cardinalityProperties will instead return the approximate number of different property keys whose path begins with path (which can currently only be ["metadata"], meaning that this can only be used to count the approximate number of distinct metadata keys among the matching time series).

Example request:

{
  "aggregate": "cardinalityProperties",
  "path": ["metadata"],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}

The result is likely exact when the set of unique values is small. In this example, there are likely two distinct metadata keys among the matching time series:

{"items": [{"count": 2}]}

The aggregateFilter field may be specified if aggregate is set to cardinalityProperties or uniqueProperties. The structure of this field is similar to that of advancedFilter, except that the set of leaf filters is smaller (in, prefix, and range), and that none of the leaf filters specify a property. Unlike advancedFilter, which is applied before the aggregation (in order to restrict the set of time series that the aggregation operation should be applied to), aggregateFilter is applied after the initial aggregation has been performed, in order to restrict the set of results.

Click here for more details about aggregateFilter.

When aggregate is set to uniqueProperties, the result set contains a number of property paths, each with an associated count that shows how many time series contained that property (among those time series that matched the filter and advancedFilter, if they were specified). If aggregateFilter is specified, it will restrict the property paths included in the output. Let us add an aggregateFilter to the uniqueProperties example from above:

{
  "aggregate": "uniqueProperties",
  "path": ["metadata"],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}},
  "aggregateFilter": {"prefix": {"value": "t"}}
}

Now, the result only contains those metadata properties whose key begins with t (but it will be the same set of metadata properties that begin with t as in the original query without aggregateFilter, and the counts will be the same):

{
  "items": [
    {"values": [{"property": ["metadata", "tag"]}], "count": 43}
  ]
}

Similarly, adding aggregateFilter to cardinalityProperties will return the approximate number of properties whose property key matches aggregateFilter from those time series matching the filter and advancedFilter (or from all time series if neither filter nor aggregateFilter are specified):

{
  "aggregate": "cardinalityProperties",
  "path": ["metadata"],
  "advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}},
  "aggregateFilter": {"prefix": {"value": "t"}}
}

As we saw above, only one property matches:

{"items": [{"count": 1}]}

Note that aggregateFilter is also accepted when aggregate is set to cardinalityValues or cardinalityProperties. For those aggregations, the effect of any aggregateFilter could also be achieved via a similar advancedFilter. However, aggregateFilter is not accepted when aggregate is omitted or set to count.

Rate and concurrency limits

Rate and concurrency limits apply to this endpoint. If a request exceeds one of the limits, it is throttled with a 429: Too Many Requests response. Limit types and how to avoid being throttled are described here.

Limit Per project Per user (identity)
Rate 15 requests per second 10 requests per second
Concurrency 6 concurrent requests 4 concurrent requests
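A common client-side reaction to a 429 response is to retry with exponential backoff. The helper below is a sketch; the delay values and attempt cap are illustrative choices, not prescribed by the API:

```javascript
// Exponential backoff with a cap, for retrying 429 responses.
// baseMs and capMs are illustrative, not API-mandated values.
function backoffMs(attempt, baseMs = 250, capMs = 10000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Sketch: retry a request function while it reports throttling.
// doRequest is a hypothetical function returning { status, ... }.
async function withRetry(doRequest, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res;
    await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
  }
  throw new Error("still throttled after retries");
}
```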
Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

Aggregates the time series that match the given criteria.

One of
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (TimeSeriesFilterLanguage)

A filter DSL (Domain Specific Language) to define advanced filter queries.

At the top level, an advancedFilter expression is either a single Boolean filter or a single leaf filter. Boolean filters contain other Boolean filters and/or leaf filters. The total number of filters may be at most 100, and the depth (the greatest number of times filters have been nested inside each other) may be at most 10. The search leaf filter may at most be used twice within a single advancedFilter, but all other filters can be used as many times as you like as long as the other limits are respected.

object (Filter)
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (in (object) or range (object) or prefix (object))) (TimeSeriesAggregateFilter)

A filter DSL (Domain Specific Language) to define aggregate filters.

aggregate
string
Value: "count"

The count aggregation gets the number of time series that match the filter(s). This is the default aggregation, which will also be applied if aggregate is not specified.

Responses

Request samples

Content type
application/json
Example
{
  • "advancedFilter": {
    • "or": [
      ]
    },
  • "filter": {
    • "name": "string",
    • "unit": "string",
    • "isString": true,
    • "isStep": true,
    • "metadata": {
      },
    • "assetIds": [
      ],
    • "assetExternalIds": [
      ],
    • "rootAssetIds": [
      ],
    • "assetSubtreeIds": [
      ],
    • "dataSetIds": [
      ],
    • "externalIdPrefix": "my.known.prefix",
    • "createdTime": {
      },
    • "lastUpdatedTime": {
      }
    },
  • "aggregateFilter": {
    • "and": [
      ]
    },
  • "aggregate": "count"
}

Response samples

Content type
application/json
Example
{
  • "items": [
    • {
      }
    ]
}

Create time series

Creates one or more time series.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json
required
Array of objects (PostTimeSeriesMetadataDTO) [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Delete data points

Delete data points from time series.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

The list of delete requests to perform.

required
Array of QueryWithInternalId (object) or QueryWithExternalId (object) (DatapointsDeleteRequest) [ 1 .. 10000 ] items

List of delete filters.

Responses

Request samples

Content type
application/json
{
  • "items": [
    • {
      }
    ]
}

Response samples

Content type
application/json
{ }

Delete time series

Deletes the time series with the specified IDs and their data points.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json

Specify a list of the time series to delete.

required
Array of QueryWithInternalId (object) or QueryWithExternalId (object) [ 1 .. 1000 ] items unique

List of ID objects.

ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  • "items": [
    • {
      }
    ],
  • "ignoreUnknownIds": false
}

Response samples

Content type
application/json
{ }

Filter time series

Retrieves a list of time series that match the given criteria.

Advanced filtering

The advancedFilter field lets you create complex filtering expressions that combine simple operations, such as equals, prefix, and exists, by using the Boolean operators and, or, and not. Filtering applies to basic fields as well as metadata. See the advancedFilter syntax in the request example.

Supported leaf filters

Leaf filter Supported fields Description and example
containsAll Array type fields Only includes results which contain all of the specified values.
{"containsAll": {"property": ["property"], "values": [1, 2, 3]}}
containsAny Array type fields Only includes results which contain at least one of the specified values.
{"containsAny": {"property": ["property"], "values": [1, 2, 3]}}
equals Non-array type fields Only includes results that are equal to the specified value.
{"equals": {"property": ["property"], "value": "example"}}
exists All fields Only includes results where the specified property exists (has a value).
{"exists": {"property": ["property"]}}
in Non-array type fields Only includes results that are equal to one of the specified values.
{"in": {"property": ["property"], "values": [1, 2, 3]}}
prefix String type fields Only includes results which start with the specified text.
{"prefix": {"property": ["property"], "value": "example"}}
range Non-array type fields Only includes results that fall within the specified range.
{"range": {"property": ["property"], "gt": 1, "lte": 5}}
Supported operators: gt, lt, gte, lte
search ["name"] and ["description"] Introduced to provide functional parity with the /timeseries/search endpoint.
{"search": {"property": ["property"], "value": "example"}}
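Combining the leaf filters above with the Boolean operators yields an advancedFilter object like the following (the property values are illustrative, not from a real project):

```javascript
// An advancedFilter matching time series whose name starts with "pump",
// that have a unit set, and that are not string time series.
const advancedFilter = {
  and: [
    { prefix: { property: ["name"], value: "pump" } },
    { exists: { property: ["unit"] } },
    { not: { equals: { property: ["isString"], value: true } } },
  ],
};
// Send as the "advancedFilter" field of the request body.
```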

Supported properties

Property Type
["description"] string
["externalId"] string
["metadata", "<someCustomKey>"] string
["name"] string
["unit"] string
["assetId"] number
["assetRootId"] number
["createdTime"] number
["dataSetId"] number
["id"] number
["lastUpdatedTime"] number
["isStep"] Boolean
["isString"] Boolean
["accessCategories"] array of strings
["securityCategories"] array of numbers

Limits

  • Filter query max depth: 10.
  • Filter query max number of clauses: 100.
  • and and or clauses must have at least one element (and at most 99, since each element counts toward the total clause limit, as does the and/or clause itself).
  • The property array of each leaf filter has the following limitations:
    • Number of elements in the array is 1 or 2.
    • Elements must not be null or blank.
    • Each element max length is 256 characters.
    • The property array must match one of the existing properties (static top-level property or dynamic metadata property).
  • containsAll, containsAny, and in filter values array size must be in the range [1, 100].
  • containsAll, containsAny, and in filter values array must contain elements of number or string type (matching the type of the given property).
  • range filter must have at least one of the gt, gte, lt, lte attributes. gt is mutually exclusive with gte, and lt is mutually exclusive with lte.
  • gt, gte, lt, lte in the range filter must be of number or string type (matching the type of the given property).
  • search filter value must not be blank, its length must be in the range [1, 128], and there may be at most two search filters in the entire filter query.
  • The maximum length of the value of a leaf filter that is applied to a string property is 256.

Sorting

By default, time series are sorted by their creation time in ascending order. Sorting by another property or by several other properties can be explicitly requested via the sort field, which must contain a list of one or more sort specifications. Each sort specification indicates the property to sort on and, optionally, the order in which to sort (defaults to asc). If multiple sort specifications are supplied, the results are sorted on the first property, and those with the same value for the first property are sorted on the second property, and so on.
Partitioning is done independently of sorting; there is no guarantee of sort order between elements from different partitions.

Null values

If the nulls field has the auto value, or the field isn't specified, null (missing) values are considered bigger than any other values. They are placed last when sorting in asc order and first in desc order. Otherwise, missing values are placed according to the nulls field (last or first), and their placement doesn't depend on the order field. Note that the number zero, empty strings, and empty lists are all considered not null.
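The auto placement of nulls can be sketched as a comparator (an assumed simplification; CDF sorts server-side):

```javascript
// Compare two values the way nulls:"auto" describes: null/missing compares
// as greater than any other value, so it sorts last in "asc" and first in
// "desc". Zero, empty strings, and empty lists are NOT null.
function compareWithAutoNulls(a, b, order = "asc") {
  const aNull = a === null || a === undefined;
  const bNull = b === null || b === undefined;
  let cmp;
  if (aNull && bNull) cmp = 0;
  else if (aNull) cmp = 1;       // null is "bigger" than any value
  else if (bNull) cmp = -1;
  else cmp = a < b ? -1 : a > b ? 1 : 0;
  return order === "desc" ? -cmp : cmp;
}
```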

Example

{
  "sort": [
    {
      "property" : ["createdTime"],
      "order": "desc",
      "nulls": "last"
    },
    {
      "property" : ["metadata", "<someCustomKey>"]
    }
  ]
}

Properties

You can sort on the following properties:

Property
["assetId"]
["createdTime"]
["dataSetId"]
["description"]
["externalId"]
["lastUpdatedTime"]
["metadata", "<someCustomKey>"]
["name"]

Limits

The sort array must contain 1 to 2 elements.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema: application/json
object (Filter)
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (TimeSeriesFilterLanguage)

A filter DSL (Domain Specific Language) to define advanced filter queries.

At the top level, an advancedFilter expression is either a single Boolean filter or a single leaf filter. Boolean filters contain other Boolean filters and/or leaf filters. The total number of filters may be at most 100, and the depth (the greatest number of times filters have been nested inside each other) may be at most 10. The search leaf filter may at most be used twice within a single advancedFilter, but all other filters can be used as many times as you like as long as the other limits are respected.

limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Return up to this many results.

cursor
string
partition
string (Partition)

Splits the data set into N partitions. The attribute is specified as an "M/N" string, where M is a natural number in the interval [1, N]. You need to follow the cursors within each partition in order to receive all the data.

To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions.

When you specify more than 10 partitions, CDF may silently reduce the number of partitions. For example, suppose CDF reduces the number of partitions to K = 10. If you specify an M/N partition value with M = 8 and N = 20 (that is, "partition": "8/20"), CDF changes N to K = 10 and processes the request. But if you specify M = 11 (M > K) and N = 20 (that is, "partition": "11/20"), CDF replies with an empty result list and no cursor in the response.

In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status.

Array of objects (TimeSeriesSortItem) [ 1 .. 2 ] items

Sort by array of selected properties.

Responses

Request samples

Content type
application/json
{
  • "filter": {
    • "name": "string",
    • "unit": "string",
    • "isString": true,
    • "isStep": true,
    • "metadata": {
      },
    • "assetIds": [
      ],
    • "assetExternalIds": [
      ],
    • "rootAssetIds": [
      ],
    • "assetSubtreeIds": [
      ],
    • "dataSetIds": [
      ],
    • "externalIdPrefix": "my.known.prefix",
    • "createdTime": {
      },
    • "lastUpdatedTime": {
      }
    },
  • "advancedFilter": {
    • "or": [
      ]
    },
  • "limit": 100,
  • "cursor": "4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo",
  • "partition": "1/10",
  • "sort": [
    • {
      }
    ]
}

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ],
  • "nextCursor": "string"
}

Insert data points

Insert data points into a time series. You can do this for multiple time series. If you insert a data point with a timestamp that already exists, the existing value is overwritten with the new value.
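For example, a request body of the following shape inserts two numeric data points into one time series identified by externalId (the external ID, timestamps, and values are illustrative):

```json
{
  "items": [
    {
      "externalId": "my.known.id",
      "datapoints": [
        { "timestamp": 1577836800000, "value": 21.5 },
        { "timestamp": 1577836801000, "value": 21.7 }
      ]
    }
  ]
}
```

Re-inserting a data point at timestamp 1577836800000 would overwrite the value 21.5.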

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
Request Body schema:

The datapoints to insert.

required
Array of DatapointsWithInternalId (object) or DatapointsWithExternalId (object) (DatapointsPostDatapoint) [ 1 .. 10000 ] items

Responses

Request samples

Content type
{
  • "items": [
    • {
      }
    ]
}

Response samples

Content type
application/json
{ }

List time series

List time series. Use nextCursor to paginate through the results.

Authorizations:
oidc-tokenoauth2-client-credentialsoauth2-open-industrial-dataoauth2-auth-code
query Parameters
limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Limits the number of results to return. CDF returns a maximum of 1000 results even if you specify a higher limit.

includeMetadata
boolean
Default: true

Whether the metadata field should be returned or not.

cursor
string
Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo

Cursor for paging through results.

partition
string
Example: partition=1/10

Splits the data set into N partitions. The attribute is specified as an "M/N" string, where M is a natural number in the interval [1, N]. You need to follow the cursors within each partition in order to receive all the data.

To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions.

When you specify more than 10 partitions, CDF may silently reduce the number of partitions. For example, suppose CDF reduces the number of partitions to K = 10. If you specify an M/N partition value with M = 8 and N = 20 (that is, "partition": "8/20"), CDF changes N to K = 10 and processes the request. But if you specify M = 11 (M > K) and N = 20 (that is, "partition": "11/20"), CDF replies with an empty result list and no cursor in the response.

In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status.

assetIds
string <jsonArray(int64)> (JsonArrayInt64)
Example: assetIds=[363848954441724, 793045462540095, 1261042166839739]

Gets the time series related to the specified assets. The format is a list of IDs serialized as a JSON array (int64). Takes [ 1 .. 100 ] unique items.

rootAssetIds
string <jsonArray(int64)> (JsonArrayInt64)
Example: rootAssetIds=[363848954441724, 793045462540095, 1261042166839739]

Only includes time series that have a related asset in a tree rooted at any of these root assetIds.

externalIdPrefix
string (CogniteExternalIdPrefix) <= 255 characters
Example: externalIdPrefix=my.known.prefix

Filter by this (case-sensitive) prefix for the external ID.

Responses

Request samples

const timeseries = await client.timeseries.list({ filter: { assetIds: [1, 2] }});

Response samples

Content type
application/json
{
  • "items": [
    • {
      }
    ],
  • "nextCursor": "string"
}

Retrieve data points

Retrieves a list of data points from multiple time series in a project. This operation supports aggregation and pagination. Learn more about aggregation.

Note: when start isn't specified in the top level and for an individual item, it will default to epoch 0, which is 1 January, 1970, thus excluding potential existent data points before 1970. start needs to be specified as a negative number to get data points before 1970.
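For instance, a query for data from 1960 needs a negative start value (plain JavaScript, computing milliseconds since epoch):

```javascript
// start must be negative (milliseconds since epoch) to reach pre-1970 data.
const start = Date.UTC(1960, 0, 1); // 1 January 1960 → -315619200000
const end = Date.UTC(1970, 0, 1);   // epoch 0, the default start
console.log(start < 0, end === 0);  // → true true
```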

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

Specify parameters to query for multiple data points. If you omit fields in individual data point query items, the top-level field values are used. For example, you can specify a default limit for all items by setting the top-level limit field. If you request aggregates, only the aggregates are returned. If you don't request any aggregates, all data points are returned.

required
Array of QueryWithInternalId (object) or QueryWithExternalId (object) (DatapointsQuery) [ 1 .. 100 ] items
integer or string (TimestampOrStringStart)

Get datapoints starting from, and including, this time. The format is N[timeunit]-ago where timeunit is w,d,h,m,s. Example: '2d-ago' gets datapoints that are up to 2 days old. You can also specify time in milliseconds since epoch. Note that for aggregates, the start time is rounded down to a whole granularity unit (in UTC timezone). Daily granularities (d) are rounded to 0:00 AM; hourly granularities (h) to the start of the hour, etc.

integer or string (TimestampOrStringEnd)

Get datapoints up to, but excluding, this point in time. Same format as for start. Note that when using aggregates, the end will be rounded up such that the last aggregate represents a full aggregation interval containing the original end, where the interval is the granularity unit times the granularity multiplier. For granularity 2d, the aggregation interval is 2 days, if end was originally 3 days after the start, it will be rounded to 4 days after the start.
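The rounding rule can be sketched as follows (an illustrative model; the names are ours, not part of the API):

```javascript
// Illustrative model of the documented end-time rounding for aggregates:
// end is rounded up so the last aggregation interval (granularity unit
// times multiplier) containing the original end is fully covered.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function roundEndUp(startMs, endMs, intervalMs) {
  const intervals = Math.ceil((endMs - startMs) / intervalMs);
  return startMs + intervals * intervalMs;
}

// Granularity "2d": an end 3 days after start is rounded to 4 days after.
const end = roundEndUp(0, 3 * MS_PER_DAY, 2 * MS_PER_DAY);
console.log(end / MS_PER_DAY); // → 4
```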

limit
integer <int32>
Default: 100

Returns up to this number of data points. The maximum is 100000 non-aggregated data points and 10000 aggregated data points in total across all queries in a single request.

aggregates
Array of strings (Aggregate) [ 1 .. 10 ] items unique
Items Enum: "average" "max" "min" "count" "sum" "interpolation" "stepInterpolation" "totalVariation" "continuousVariance" "discreteVariance"

Specify the aggregates to return. Omit to return data points without aggregation.

granularity
string

The time granularity size and unit to aggregate over. Valid entries are 'day, hour, minute, second', or short forms 'd, h, m, s', or a multiple of these indicated by a number as a prefix. For 'second' and 'minute', the multiple must be an integer between 1 and 120 inclusive; for 'hour' and 'day', the multiple must be an integer between 1 and 100000 inclusive. For example, a granularity '5m' means that aggregates are calculated over 5 minutes. This field is required if aggregates are specified.
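A client-side sketch of these granularity rules (illustrative only; the API performs its own validation):

```javascript
// Validates a granularity string per the rules above: unit day/hour/minute/
// second (or d/h/m/s) with an optional integer multiple; the multiple must
// be in [1, 120] for second/minute and [1, 100000] for hour/day.
function isValidGranularity(granularity) {
  const match = /^(\d*)(day|hour|minute|second|d|h|m|s)$/.exec(granularity);
  if (!match) return false;
  const multiple = match[1] === "" ? 1 : Number(match[1]);
  const unit = match[2][0]; // first letter: d, h, m, or s
  const max = unit === "s" || unit === "m" ? 120 : 100000;
  return multiple >= 1 && multiple <= max;
}

console.log(isValidGranularity("5m"));   // → true
console.log(isValidGranularity("121m")); // → false
```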

includeOutsidePoints
boolean
Default: false

Defines whether to include the last data point before the requested time period and the first one after. This option can be useful for interpolating data. It's not available for aggregates or cursors. Note: If there are more than limit data points in the time period, we will omit the excess data points and then append the first data point after the time period, thus causing a gap with omitted data points. When this is the case, we return up to limit+2 data points. When doing manual paging (sequentially requesting smaller intervals instead of requesting a larger interval and using cursors to get all the data points) with this field set to true, the start of each subsequent request should be one millisecond more than the timestamp of the second-to-last data point from the previous response. This is because the last data point in most cases will be the extra point from outside the interval.
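The manual-paging rule can be sketched as (a hypothetical helper, not part of any SDK):

```javascript
// With includeOutsidePoints and manual paging, start the next request one
// millisecond after the second-to-last point of the previous response,
// because the last point is usually the extra point outside the interval.
function nextStart(datapoints) {
  const secondToLast = datapoints[datapoints.length - 2];
  return secondToLast.timestamp + 1;
}

// The point at timestamp 35 is assumed to lie outside the interval.
console.log(nextStart([{ timestamp: 10 }, { timestamp: 20 }, { timestamp: 35 }])); // → 21
```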

ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ],
  "start": 0,
  "end": 0,
  "limit": 100,
  "aggregates": [
    "average"
  ],
  "granularity": "1h",
  "includeOutsidePoints": false,
  "ignoreUnknownIds": false
}

Response samples

Content type
{
  "items": [
    {}
  ]
}

Retrieve latest data point

Retrieves the latest data point in one or more time series. Note that the latest data point in a time series is the one with the highest timestamp, which is not necessarily the one that was ingested most recently.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

The list of the queries to perform.

required
Array of QueryWithInternalId (object) or QueryWithExternalId (object) (LatestDataBeforeRequest) [ 1 .. 100 ] items

List of latest queries

ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ],
  "ignoreUnknownIds": false
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Retrieve time series

Retrieves one or more time series by ID or external ID. The response returns the time series in the same order as in the request.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

List of the IDs of the time series to retrieve.

required
Array of QueryWithInternalId (object) or QueryWithExternalId (object) [ 1 .. 1000 ] items unique

List of ID objects.

ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ],
  "ignoreUnknownIds": false
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Search time series

Fulltext search for time series based on result relevance. Primarily meant for human-centric use cases, not for programs, since matching and order may change over time. Additional filters can also be specified. This operation does not support pagination.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json
object (Filter)
object (Search)
limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Return up to this many results.

Responses

Request samples

Content type
application/json
{
  "filter": {
    "name": "string",
    "unit": "string",
    "isString": true,
    "isStep": true,
    "metadata": {},
    "assetIds": [],
    "assetExternalIds": [],
    "rootAssetIds": [],
    "assetSubtreeIds": [],
    "dataSetIds": [],
    "externalIdPrefix": "my.known.prefix",
    "createdTime": {},
    "lastUpdatedTime": {}
  },
  "search": {
    "name": "string",
    "description": "string",
    "query": "some other"
  },
  "limit": 100
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Update time series

Updates one or more time series. Fields outside of the request remain unchanged.

For primitive fields (those whose type is string, number, or boolean), use "set": value to update the value; use "setNull": true to set the field to null.

For JSON array fields (for example securityCategories), use "set": [value1, value2] to update the value; use "add": [value1, value2] to add values; use "remove": [value1, value2] to remove values.
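For illustration, a hedged example of one update item combining these operations (the field names are examples drawn from the time series schema, not an exhaustive list):

```javascript
// One TimeSeriesUpdateById item illustrating set / setNull / add semantics.
const update = {
  id: 123,
  update: {
    name: { set: "new name" },           // primitive field: replace value
    description: { setNull: true },      // primitive field: set to null
    securityCategories: { add: [4, 5] }  // array field: add values
  },
};

console.log(JSON.stringify(update.update.description)); // → {"setNull":true}
```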

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

List of changes.

required
Array of TimeSeriesUpdateById (object) or TimeSeriesUpdateByExternalId (object) (TimeSeriesUpdate) [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Synthetic Time Series

Synthetic Time Series (STS) is a way to combine various input time series, constants, and operators to create completely new time series.

For example, you can use the expression 24 * TS{externalId='production/hour'} to convert hourly production rates to daily ones.

But STS is not limited to simple conversions:

  • Combine different time series: TS{id=123} + TS{externalId='hei'}.
  • Apply functions of time series: sin(pow(TS{id=123}, 2)).
  • Aggregate time series: TS{id=123, aggregate='average', granularity='1h'} + TS{id=456}.

To learn more about synthetic time series please follow our guide.

Synthetic query

Execute an on-the-fly synthetic query

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

The list of queries to perform

required
Array of objects (SyntheticQuery) [ 1 .. 10 ] items
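A sketch of a request body for this endpoint: the expression is taken from the STS examples above, while the start, end, and limit fields are assumptions based on the data points query format.

```javascript
// Hypothetical synthetic query payload converting hourly production rates
// to daily ones; start/end/limit field names are assumed, not confirmed here.
const body = {
  items: [
    {
      expression: "24 * TS{externalId='production/hour'}",
      start: "30d-ago",
      end: "now",
      limit: 1000,
    },
  ],
};

console.log(body.items[0].expression); // → 24 * TS{externalId='production/hour'}
```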

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Events

Event objects store complex information about multiple assets over a time period. For example, an event can describe two hours of maintenance on a water pump and some associated pipes, or a future time window where the pump is scheduled for inspection. This is in contrast with data points in time series that store single pieces of information about one asset at specific points in time (e.g., temperature measurements).

An event’s time period is defined by a start time and end time, both millisecond timestamps since the UNIX epoch. The timestamps can be in the future. In addition, events can have a text description as well as arbitrary metadata and properties.

Asset references obtained from an event (through asset IDs) may be invalid, owing to the non-transactional nature of HTTP. They are maintained in an eventually consistent manner.

Rate and concurrency limits

Rate and concurrency limits apply to some of the endpoints. If a request exceeds one of the limits, it will be throttled with a 429: Too Many Requests response. More on limit types and how to avoid being throttled is described here.

The following limits apply to the List events, Filter events, Aggregate events, and Search events endpoints. These limits apply to all endpoints simultaneously, i.e., requests made to different endpoints are counted together. Please note the additional conditions that apply to the Aggregate events endpoint, as it provides the most resource-consuming operations.

| Limit | Per project | Per user (identity) |
| --- | --- | --- |
| Rate | 30 rps total, of which no more than 15 rps to Aggregate | 20 rps, of which no more than 10 rps to Aggregate |
| Concurrency | 15 parallel requests, of which no more than 6 to Aggregate | 10 parallel requests, of which no more than 4 to Aggregate |

Aggregate events

The aggregation API lets you compute aggregated results on events, such as the count of all events in a project or the distinct descriptions of the events in your project.

Aggregate filtering

Filter (filter & advancedFilter) data for aggregates

Filters behave the same way as for the Filter events endpoint. In text properties, the values are aggregated in a case-insensitive manner.

aggregateFilter to filter aggregate results

aggregateFilter works similarly to advancedFilter but always applies to aggregate properties. For instance, in an aggregation for the source property, only the values (aka buckets) of the source property can be filtered out.

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval usage. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Events resource description for more information.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json
One of
aggregate
string
Value: "count"

Type of aggregation to apply. count: Get an approximate number of events matching the filters.

(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (EventAdvancedFilter)

A filter DSL (Domain Specific Language) to define advanced filter queries.

See more information about filtering DSL here.

Supported properties:

| Property | Type |
| --- | --- |
| ["assetIds"] | array of [number] |
| ["createdTime"] | number |
| ["dataSetId"] | number |
| ["endTime"] | number |
| ["id"] | number |
| ["lastUpdatedTime"] | number |
| ["startTime"] | number |
| ["description"] | string |
| ["externalId"] | string |
| ["metadata"] | string |
| ["metadata", "someCustomKey"] | string |
| ["source"] | string |
| ["subtype"] | string |
| ["type"] | string |

Note: Filtering on the ["metadata"] property has the following logic: if the value of any metadata key in an event matches the filter, the event matches the filter.

object (EventFilter)

Filter on events filter with exact match

Responses

Request samples

Content type
application/json
Example
{
  "aggregate": "count",
  "advancedFilter": {
    "or": []
  },
  "filter": {
    "startTime": {},
    "endTime": {},
    "activeAtTime": {},
    "metadata": {},
    "assetIds": [],
    "assetExternalIds": [],
    "assetSubtreeIds": [],
    "dataSetIds": [],
    "source": "string",
    "type": "string",
    "subtype": "string",
    "createdTime": {},
    "lastUpdatedTime": {},
    "externalIdPrefix": "my.known.prefix"
  }
}

Response samples

Content type
application/json
Example
{
  "items": [
    {}
  ]
}

Create events

Creates multiple event objects in the same project. It is possible to post a maximum of 1000 events per request.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

List of events to be posted. It is possible to post a maximum of 1000 events per request.

required
Array of objects (ExternalEvent) [ 1 .. 1000 ] items

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Response samples

Content type
application/json
{
  "items": [
    {}
  ]
}

Delete events

Deletes events with the given ids. A maximum of 1000 events can be deleted per request.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json

List of IDs to delete.

required
Array of InternalId (object) or ExternalId (object) (EitherId) [ 1 .. 1000 ] items
ignoreUnknownIds
boolean
Default: false

Ignore IDs and external IDs that are not found

Responses

Request samples

Content type
application/json
{
  "items": [
    {}
  ],
  "ignoreUnknownIds": false
}

Response samples

Content type
application/json
{ }

Filter events

Retrieve a list of events in the same project. This operation supports pagination by cursor. Apply Filtering and Advanced filtering criteria to select a subset of events.

Advanced filtering

Advanced filter lets you create complex filtering expressions that combine simple operations, such as equals, prefix, exists, etc., using boolean operators and, or, and not. It applies to basic fields as well as metadata.

See the advancedFilter attribute in the example.

See more information about filtering DSL here.

Supported leaf filters

| Leaf filter | Supported fields | Description |
| --- | --- | --- |
| containsAll | Array type fields | Only includes results which contain all of the specified values. {"containsAll": {"property": ["property"], "values": [1, 2, 3]}} |
| containsAny | Array type fields | Only includes results which contain at least one of the specified values. {"containsAny": {"property": ["property"], "values": [1, 2, 3]}} |
| equals | Non-array type fields | Only includes results that are equal to the specified value. {"equals": {"property": ["property"], "value": "example"}} |
| exists | All fields | Only includes results where the specified property exists (has a value). {"exists": {"property": ["property"]}} |
| in | Non-array type fields | Only includes results that are equal to one of the specified values. {"in": {"property": ["property"], "values": [1, 2, 3]}} |
| prefix | String type fields | Only includes results which start with the specified value. {"prefix": {"property": ["property"], "value": "example"}} |
| range | Non-array type fields | Only includes results that fall within the specified range. {"range": {"property": ["property"], "gt": 1, "lte": 5}} Supported operators: gt, lt, gte, lte |
| search | ["description"] | Introduced to provide functional parity with the /events/search endpoint. {"search": {"property": ["property"], "value": "example"}} |

The search leaf filter provides functional parity with the /events/search endpoint. It's available only for the ["description"] field. When you specify only this filter with no explicit ordering, behavior is the same as that of the /events/search endpoint without filters. Explicit sorting overrides the default ordering by relevance. You can use the search leaf filter like any other leaf filter to create complex queries.

See the search filter in the advancedFilter attribute in the example.

advancedFilter attribute limits

  • filter query max depth: 10
  • filter query max number of clauses: 100
  • and and or clauses must have at least one element
  • property array of each leaf filter has the following limitations:
    • number of elements in the array is in the range [1, 2]
    • elements must not be blank
    • each element max length is 128 symbols
    • property array must match one of the existing properties (static or dynamic metadata)
  • containsAll, containsAny, and in filter values array size must be in the range [1, 100]
  • containsAll, containsAny, and in filter values array must contain elements of a primitive type (number, string)
  • range filter must have at least one of the gt, gte, lt, lte attributes; gt is mutually exclusive with gte, and lt is mutually exclusive with lte. For metadata, both upper and lower bounds must be specified.
  • gt, gte, lt, lte in the range filter must be a primitive value
  • search filter value must not be blank and the length must be in the range [1, 128]
  • filter query may have maximum 2 search leaf filters
  • maximum leaf filter string value length is different depending on the property the filter is using:
    • externalId - 255
    • description - 128 for the search filter and 255 for other filters
    • type - 64
    • subtype - 64
    • source - 128
    • any metadata key - 128

Sorting

By default, events are sorted by their creation time in ascending order. Use the search leaf filter to sort the results by relevance. Sorting by other fields can be explicitly requested. The order field is optional and defaults to desc for _score_ and asc for all other fields. The nulls field is optional and defaults to auto, which the service translates to last for the asc order and first for the desc order. Partitions are handled independently of sorting: there's no guarantee of the sort order between elements from different partitions.

See the sort attribute in the example.

Null values

If the nulls attribute has the auto value or isn't specified, null (missing) values are considered larger than any other values. They are placed last when sorting in asc order and first when sorting in desc order. Otherwise, missing values are placed according to the nulls attribute (last or first), and their placement doesn't depend on the order value. Values such as empty strings aren't considered null.
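The defaults for order and nulls described above can be modeled as follows (a hypothetical helper, not part of the API):

```javascript
// Resolves the documented defaults for a sort clause: order defaults to
// "desc" for _score_ and "asc" otherwise; nulls "auto" (or absent) maps
// to "last" for asc and "first" for desc.
function resolveSort({ property, order, nulls }) {
  const isScore = property.length === 1 && property[0] === "_score_";
  const resolvedOrder = order ?? (isScore ? "desc" : "asc");
  const resolvedNulls =
    nulls === undefined || nulls === "auto"
      ? (resolvedOrder === "asc" ? "last" : "first")
      : nulls;
  return { property, order: resolvedOrder, nulls: resolvedNulls };
}

console.log(resolveSort({ property: ["_score_"] }).order); // → desc
```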

Sorting by score

Use the special sort property _score_ when sorting by relevance. The more filters a particular event matches, the higher its score. This can be useful, for example, when building UIs. Let's assume we want exact matches to be displayed above matches by prefix, as in the request below. An event with the type fire will match both the equals and prefix filters and, therefore, have a higher score than events with types like fire training that match only the prefix filter.

"advancedFilter" : {
  "or" : [
    {
      "equals": {
        "property": ["type"], 
        "value": "fire"
      }
    },
    {
      "prefix": {
        "property": ["type"], 
        "value": "fire"
      }
    }
  ]
},
"sort": [
  {
    "property" : ["_score_"]
  }
]

Request throttling

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval usage. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Events resource description for more information.

Authorizations:
oidc-token, oauth2-client-credentials, oauth2-open-industrial-data, oauth2-auth-code
Request Body schema: application/json
object (EventFilter)

Filter on events filter with exact match

(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (EventAdvancedFilter)

A filter DSL (Domain Specific Language) to define advanced filter queries.

See more information about filtering DSL here.

Supported properties:

| Property | Type |
| --- | --- |
| ["assetIds"] | array of [number] |
| ["createdTime"] | number |
| ["dataSetId"] | number |
| ["endTime"] | number |
| ["id"] | number |
| ["lastUpdatedTime"] | number |
| ["startTime"] | number |
| ["description"] | string |
| ["externalId"] | string |
| ["metadata"] | string |
| ["metadata", "someCustomKey"] | string |
| ["source"] | string |
| ["subtype"] | string |
| ["type"] | string |

Note: Filtering on the ["metadata"] property has the following logic: if the value of any metadata key in an event matches the filter, the event matches the filter.

limit
integer <int32> [ 1 .. 1000 ]
Default: 100

Limits the maximum number of results to be returned by a single request. If there are more results, the 'nextCursor' attribute is provided as part of the response. A response may contain fewer results than the request limit.
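A minimal pagination sketch against a mocked endpoint, assuming the { items, nextCursor } response shape described above (listAll and mockFetchPage are hypothetical names, not SDK functions):

```javascript
// Pagination sketch: follow nextCursor until it is absent, keeping every
// parameter except cursor identical between requests.
async function listAll(fetchPage, body) {
  const results = [];
  let cursor;
  do {
    const page = await fetchPage({ ...body, cursor });
    results.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return results;
}

// Mocked endpoint standing in for a filter/list request.
const mockFetchPage = async ({ cursor }) =>
  cursor === undefined
    ? { items: [1, 2], nextCursor: "cursor-1" }
    : { items: [3] }; // no nextCursor: last page

listAll(mockFetchPage, { limit: 2 }).then((r) => console.log(r)); // → [ 1, 2, 3 ]
```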

Array of modern (objects) or Array of deprecated (strings)
cursor
string
partition
string (Partition)