This is the reference documentation for the Cognite API, with an overview of all the available methods.
Most resource types support pagination, indicated by the nextCursor field in the response. By passing the value of nextCursor as the cursor parameter, you get the next page of up to limit results. Note that all parameters except cursor must stay the same between requests.
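A minimal sketch of this cursor loop; the fetchPage callback and Page shape are illustrative stand-ins for your HTTP client, not part of the API:

```typescript
// Shape of a paginated CDF-style response: items plus an optional nextCursor.
type Page<T> = { items: T[]; nextCursor?: string };

// Repeatedly fetch pages, feeding nextCursor back in as the cursor,
// until the response carries no nextCursor.
async function listAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined = undefined;
  do {
    // All query parameters except `cursor` must stay identical between calls.
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```

In practice fetchPage would issue the real request (e.g. to /events) with the same filter and limit each time, varying only the cursor.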
If you want to download a large number of resources (say, events), paginating through millions of records can be slow. We support parallel retrieval through the partition parameter, which has the format m/n, where n is the number of partitions to split the entire data set into. For example, to download the entire data set split into 10 partitions, query /events with partition=m/10 in parallel, with m running from 1 to 10. The partition parameter must be passed to all subqueries.
Processing of parallel retrieval requests is subject to concurrency quota availability. The request returns a 429 response when concurrency limits are exceeded; see the Request throttling chapter below. To prevent unexpected problems and maximize read throughput, use at most 10 partitions. Some CDF resources automatically enforce a maximum of 10 partitions. For more specific and detailed information, read the partition attribute documentation for the CDF resource you're using.
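The fan-out described above can be sketched as follows; the helper name and the 10-partition cap check are illustrative:

```typescript
// Produce the partition parameter values "1/n" .. "n/n" for parallel retrieval.
// Every other query parameter must stay identical across the subqueries.
function partitionParams(n: number): string[] {
  if (n < 1 || n > 10) throw new Error("use at most 10 partitions");
  return Array.from({ length: n }, (_, i) => `${i + 1}/${n}`);
}
```

You would then issue one request per returned value in parallel (e.g. /events with partition=3/10), paginating each subquery with its own cursor while keeping partition fixed.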
Cognite Data Fusion (CDF) returns the HTTP 429 (Too Many Requests) response status code when project capacity exceeds its limit.
The throttling can happen:
Cognite recommends using a retry strategy based on truncated exponential backoff to handle requests that receive HTTP 429 responses, and using a reasonable number (up to 10) of parallel retrieval partitions. Following these strategies lets you slow down the request frequency to maximize throughput without having to re-submit or retry failing requests.
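A truncated-exponential-backoff delay schedule might look like this; the base and cap values are illustrative choices, not prescribed by the API:

```typescript
// Delay before retry number `attempt` (0-based): exponential growth,
// truncated at capMs, with full jitter to avoid synchronized retries.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 32000): number {
  const exp = baseMs * 2 ** attempt;
  const truncated = Math.min(exp, capMs);
  // Full jitter: a random delay in [0, truncated).
  return Math.random() * truncated;
}
```

On each 429 response, wait backoffDelayMs(attempt) milliseconds and retry, incrementing attempt until the request succeeds or a retry budget is exhausted.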
This API uses calendar versioning, and version names follow the YYYYMMDD format. You can find the currently available versions by using the version selector at the top of this page. To use a specific API version, pass the cdf-version: $version header along with your requests to the API.
Beta versions provide a preview of what the stable version will look like in the future. They contain functionality that is reasonably mature and highly likely to become part of the stable API. Beta versions are indicated by a -beta suffix after the version name. For example, the beta version header for the 2023-01-01 version is cdf-version: 20230101-beta.
Alpha versions contain functionality that is new and experimental, and not guaranteed to ever become part of the stable API. This functionality comes with no guarantee of service, so use it with caution. Alpha versions are indicated by an -alpha suffix after the version name. For example, the alpha version header for the 2023-01-01 version is cdf-version: 20230101-alpha.
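Pinning a version then amounts to sending this header on every request; a small sketch (the helper name is illustrative):

```typescript
// Build the cdf-version header for a pinned calendar version,
// optionally opting into the beta channel for that version.
function versionHeaders(version: string, beta = false): Record<string, string> {
  return { "cdf-version": beta ? `${version}-beta` : version };
}
```

The returned object would be merged into the headers of each API request alongside authentication headers.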
Identity providers (IdPs) are required to be compatible with the OpenID Connect Discovery 1.0 standard, and compliance will now be enforced by the Projects API.
oidcConfiguration.jwksUrl and oidcConfiguration.tokenUrl can be omitted entirely when updating the OIDC configuration for a project. Both fields are preserved for backwards compatibility of the API. However, if they are specified as part of the request body, the values must match exactly the values specified in the OpenID provider configuration document for the configured issuer (found at https://{issuer-url}/.well-known/openid-configuration). If the values do not match, the API returns an error message.
The oidcConfiguration.skewMs attribute has been deprecated but remains part of the API for backwards compatibility. It can be omitted from the request. If included, it must always be set to 0.
The oidcConfiguration.isGroupCallbackEnabled attribute has been deprecated but remains part of the API for backwards compatibility. It can be omitted from the request. If included, it must always be set to true.
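Per OpenID Connect Discovery 1.0, the provider configuration document lives at a well-known path under the issuer URL; a sketch of deriving it (the helper name is illustrative):

```typescript
// Derive the OpenID provider configuration document URL from an issuer URL,
// tolerating a trailing slash on the issuer.
function discoveryUrl(issuer: string): string {
  return `${issuer.replace(/\/+$/, "")}/.well-known/openid-configuration`;
}
```

Fetching this URL returns the document whose jwks_uri and token_endpoint values must agree with any jwksUrl/tokenUrl you pass to the Projects API.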
Added the autoCreateDirectRelations option on the endpoint for ingesting instances. This option lets the user specify whether to create missing target nodes of direct relations.
Added the sources field on the /instances/byids endpoint.
Added the image.InstanceLink and diagrams.InstanceLink annotation types to allow you to link from objects discovered in images and engineering diagrams to data model instances.
Fixed the API documentation for the request body of the POST /projects/{project}/sessions/byids endpoint. The documentation incorrectly stated the request body schema as specifying the list of session IDs to retrieve in the form {"items": [42]}; it should in fact be {"items": [{"id": 42}]}. The documentation has been updated to reflect this.
Fixed the API documentation for the response body of the POST /projects/{project}/sessions/byids endpoint. The documentation incorrectly listed nextCursor and previousCursor fields as being returned in the response, which was not the case; these fields have now been removed from the API documentation.
Added support for nodes and edges.
Added the highlight field in the search endpoint to indicate whether matches in search results should be highlighted.
We've removed authentication via CDF service accounts and API keys, and user sign-in via /login.
Added the POST /documents/aggregate endpoint. The endpoint allows you to count documents, optionally grouped by a property, and to retrieve all unique values of a property.
Added the POST /documents/list endpoint. The endpoint allows you to iterate through all the documents in a project.
Added the POST /documents/{documentId}/content endpoint. The endpoint lets you download the entire extracted plain text of a document.
Changed the isStep parameter to be editable (i.e., removed the description stating it is not updatable) in POST /timeseries/create.
Added the isStep parameter to the TimeSeriesPatch object used in POST /timeseries/update.
Added the ignoreUnknownIds parameter to POST /sequences/delete. Setting this to true prevents the operation from failing if one or more of the given sequences do not exist; instead, those given sequences that do exist are deleted.
Added the documentation attribute, which supports Markdown (rendered as Markdown in Fusion).
Added the success, failure and seen statuses. They enable extractor developers to report status and error messages after ingesting data, and enable reporting a heartbeat through the seen status to easily identify issues related to crashed applications and scheduling issues.
Added the partition parameter to the GET /sequences endpoint to support parallel retrieval.
Added the partition parameter to the GET /timeseries endpoint to support parallel retrieval.
Added sessions to v1. Sessions let you securely delegate access to CDF resources for CDF services (such as Functions) by an external principal and for an extended time.
You can now remove columns, modify existing columns, and add new columns as well.
You can now ask for a granularity of up to 100000 hours (previously 48 hours), both in normal aggregates and in synthetic time series.
We are deprecating authentication via CDF service accounts and API keys, and user sign-in via /login
, in favor of registering applications and services with your IdP (identity provider) and using OpenID Connect and the IdP framework to manage CDF access securely.
The legacy authentication flow is available for customers using Cognite Data Fusion (CDF) on GCP until further notice. We strongly encourage customers to adopt the new authentication flows as soon as possible.
The following API endpoints are deprecated:
/api/v1/projects/*/apikeys
/api/v1/projects/*/serviceaccounts
/login
/logout
/api/v1/projects/*/groups/serviceaccounts (only the sub-resources for listing, adding, and removing members of groups)
CDF API 0.5 and 0.6 reached their end-of-life after the initial deprecation announcement in Summer 2019.
Added the partition parameter to the List 3D Nodes endpoint to support parallel requests.
Added the sortByNodeId parameter to the List 3D Nodes endpoint, improving request latency in most cases when set to true.
The status field shall be a capitalized string.
Added fileType inside derivedFields to refer to a pre-defined subset of MIME types.
Added filtering on fileType inside derivedFields to find files with a pre-defined subset of MIME types.
Added geoLocation to refer to the geographic location of the file.
Added filtering on geoLocation to find files matching a certain geographic location. To learn how to leverage the new geoLocation features, follow our guide.
Added directory, referring to the directory in the source containing the file.
directoryPrefix allows you to find files matching a certain directory prefix.
labels allows you to attach labels to files upon creation or update.
labels allows you to find files that have been annotated with specific labels.
Added applicationDomains. If this field is set, users can only sign in to the project through applications hosted on a whitelisted domain.
uniqueValues allows you to find different types and subtypes of events in your project.
labels allows you to find resources that have been annotated with specific labels.
Added support for endTime=null.
datasetId introduced in assets, files, events, time series and sequences.
dataSetIds allows you to narrow down results to resources containing datasetId, by a list of ids or externalIds of a data set. Supported by assets, files, events, time series and sequences.
count.
Added datasetsAcl for managing access to data set resources.
Added datasetScope for assets, files, events, time series and sequences ACLs. Allows you to scope down access to resources contained within a specified set of data sets.
count.
count.
Added the depth and path properties. You can use the properties in the filter and retrieve endpoints.
Added parentExternalId, which is returned for all assets that have a parent with a defined externalId.
Added assetSubtreeIds as a parameter to filter, search, and list endpoints for all core resources. assetSubtreeIds allows you to specify assets that are subtree roots, and then only retrieve resources that are related to assets within those subtrees.
Added the search.query parameter. This uses an improved search algorithm that tries a wider range of variations of the input terms and gives much better relevancy ranking than the existing search on the name and search.description fields.
The search.query parameter for time series search now uses an improved search algorithm that tries a wider range of variations of the input terms and gives much better relevancy ranking.
You can now update mimeType for existing files in files/update requests.
Time series expanded their filtering capabilities with the new Filter time series endpoint. The endpoint also supports pagination and partitioning. Check out the detailed API documentation here.
Added externalId and metadata support. Read more here.
Added rootAssetIds in files GET /files (as a query parameter) and POST /files/list (in the request body).
Added partition in /assets and /events to support parallel retrieval. See the guide for usage here.
Added intersectsBoundingBox to the list asset mappings endpoint. The parameter filters asset mappings to the assets where the bounding box intersects (or is contained within) the specified bounding box.
Added rootAssetIds to the list time series endpoint. Returns time series that are linked to an asset that has one of the root assets as an ancestor.
List of changes for the initial API v1 release in comparison to the previous version, API 0.5:
externalId added across resource types. externalId lets you define a unique ID for a data object. Learn more: External IDs.
externalIdPrefix added as a parameter to the list events, assets and files operations.
data object.
limit, cursor and nextCursor parameters.
The limit parameter no longer implicitly rounds down the requested page size to the maximum page size.
The sourceId field has been removed from resources. Use externalId instead of sourceId + source to define unique IDs for data objects.
The offset and previousCursor parameters are no longer supported for pagination across resources.
root filter.
rootId field to specify the top element in an asset hierarchy.
rootIds.
rootIds.
name property.
boostName has been removed from the search for assets operation.
path and depth fields.
rootAssetIds allows for narrowing down events belonging only to listed or specified root assets. Supported by the Filter and Search APIs.
assetIds in list files operations now supports multiple assets in the same request.
Renamed the fileType field to mimeType. The field now requires a MIME formatted string (e.g. text/plain).
Renamed the uploadedAt field to uploadedTime.
Changing the name or mimeType of a file through the update multiple files operation is no longer supported.
id and externalId of time series. Adding datapoints to time series by name has been removed.
externalId attribute for time series.
externalId during creation of time series. ExternalId requires uniqueness across time series.
id and externalId of the time series.
legacyName on time series creation. Value is required to be unique.
id and externalId lookup, as well as retrieval of multiple time series within the same request.
id and externalId.
id and externalId. Selecting by name is no longer available.
externalId.
externalId.
boostName has been removed from the search operation.
name have been removed as names are no longer unique identifiers.
name is no longer available.
Updating the isString and isStep attributes is removed. The attributes are not intended to be modified after creation of time series.
id. Use the update multiple time series endpoint instead.
name has been removed. Use externalId instead.
id from a single time series has been removed. Use retrieve multiple datapoints for multiple time series instead.
name has been removed.
name has been removed.
apiKeyId), if the request used an API key.
Renamed the userId attribute to serviceAccountId.
permissions and source attributes.
Projects are used to isolate data in CDF from each other. All objects in CDF belong to a single project, and objects in different projects are generally isolated from each other.
Creates new projects given project details. This functionality is currently only available for Cognite and re-sellers of Cognite Data Fusion. Please contact Cognite Support for more information.
List of new project specifications
required | Array of objects (NewProjectSpec) |
{
  "items": [
    {
      "name": "Open Industrial Data",
      "urlName": "publicdata",
      "adminSourceGroupId": "b7c9a5a4-99c2-4785-bed3-5e6ad9a78603",
      "parentProjectUrlName": "administrative-project",
      "oidcConfiguration": {
        "jwksUrl": "string",
        "tokenUrl": "string",
        "issuer": "string",
        "audience": "string",
        "skewMs": 0,
        "accessClaims": [
          { "claimName": "string" }
        ],
        "scopeClaims": [
          { "claimName": "string" }
        ],
        "logClaims": [
          { "claimName": "string" }
        ],
        "isGroupCallbackEnabled": false,
        "identityProviderScope": "string"
      }
    }
  ]
}
{
  "name": "Open Industrial Data",
  "urlName": "publicdata"
}
The list of all projects that the user has the 'list projects' capability in. The user may not have access to any resources in the listed projects, even if they have access to list the project itself.
{
  "items": [
    { "urlName": "publicdata" }
  ]
}
Retrieves information about a project given the project URL name.
projectName required | string Example: publicdata The CDF project name, equal to the project variable in the server URL. |
const projectInfo = await client.projects.retrieve('publicdata');
{
  "name": "Open Industrial Data",
  "urlName": "publicdata",
  "defaultGroupId": 123871937,
  "authentication": {
    "validDomains": ["example.com", "google.com"],
    "applicationDomains": ["console.cognitedata.com", "cdfapplication.example.com"]
  },
  "oidcConfiguration": {
    "jwksUrl": "string",
    "tokenUrl": "string",
    "issuer": "string",
    "audience": "string",
    "skewMs": 0,
    "accessClaims": [
      { "claimName": "string" }
    ],
    "scopeClaims": [
      { "claimName": "string" }
    ],
    "logClaims": [
      { "claimName": "string" }
    ],
    "isGroupCallbackEnabled": false,
    "identityProviderScope": "string"
  }
}
Updates the project configuration.
Warning: Updating a project will invalidate active sessions within that project.
projectName required | string Example: publicdata The CDF project name, equal to the project variable in the server URL. |
Object with updated project configuration.
required | object (ProjectUpdateObjectDTO) Contains the instructions on how to update the project. Note: azureADConfiguration, oidcConfiguration and oAuth2Configuration are mutually exclusive |
{
  "update": {
    "name": { "set": "string" },
    "defaultGroupId": { "set": 0 },
    "validDomains": { "set": ["string"] },
    "applicationDomains": { "set": ["string"] },
    "authenticationProtocol": { "set": "string" },
    "azureADConfiguration": {
      "set": {
        "appId": "string",
        "appSecret": "string",
        "tenantId": "string",
        "appResourceId": "string"
      }
    },
    "oAuth2Configuration": {
      "set": {
        "loginUrl": "string",
        "logoutUrl": "string",
        "tokenUrl": "string",
        "clientId": "string",
        "clientSecret": "string"
      }
    },
    "oidcConfiguration": {
      "modify": {
        "jwksUrl": { "set": "string" },
        "tokenUrl": { "set": "string" },
        "issuer": { "set": "string" },
        "audience": { "set": "string" },
        "skewMs": { "set": 0 },
        "accessClaims": { "set": [{ "claimName": "string" }] },
        "scopeClaims": { "set": [{ "claimName": "string" }] },
        "logClaims": { "set": [{ "claimName": "string" }] },
        "isGroupCallbackEnabled": { "set": true },
        "identityProviderScope": { "set": "string" }
      }
    }
  }
}
{
  "name": "Open Industrial Data",
  "urlName": "publicdata",
  "defaultGroupId": 123871937,
  "authentication": {
    "validDomains": ["example.com", "google.com"],
    "applicationDomains": ["console.cognitedata.com", "cdfapplication.example.com"]
  },
  "oidcConfiguration": {
    "jwksUrl": "string",
    "tokenUrl": "string",
    "issuer": "string",
    "audience": "string",
    "skewMs": 0,
    "accessClaims": [
      { "claimName": "string" }
    ],
    "scopeClaims": [
      { "claimName": "string" }
    ],
    "logClaims": [
      { "claimName": "string" }
    ],
    "isGroupCallbackEnabled": false,
    "identityProviderScope": "string"
  }
}
Groups are used to give principals the capabilities to access CDF resources. One principal can be a member of multiple groups, and one group can have multiple members. Note that having more than 20 groups per principal is not supported and may result in login issues.
Creates one or more named groups, each with a set of capabilities.
List of groups to create.
required | Array of objects (GroupSpec) |
{
  "items": [
    {
      "name": "Production Engineers",
      "sourceId": "b7c9a5a4-99c2-4785-bed3-5e6ad9a78603",
      "capabilities": [
        {
          "analyticsAcl": {
            "actions": ["READ"],
            "scope": { "all": {} }
          }
        }
      ],
      "metadata": {
        "property1": "string",
        "property2": "string"
      }
    }
  ]
}
{
  "items": [
    {
      "name": "Production Engineers",
      "sourceId": "b7c9a5a4-99c2-4785-bed3-5e6ad9a78603",
      "capabilities": [
        {
          "analyticsAcl": {
            "actions": ["READ"],
            "scope": { "all": {} }
          }
        }
      ],
      "metadata": {
        "property1": "string",
        "property2": "string"
      },
      "id": 0,
      "isDeleted": false,
      "deletedTime": 0
    }
  ]
}
Deletes the groups with the given IDs.
List of group IDs to delete
items required | Array of integers <int64> non-empty unique [ items <int64 > ] |
{
  "items": [23872937137, 1238712837, 128371973]
}
{ }
Retrieves a list of groups the asking principal is a member of. Principals with the groups:list capability can optionally ask for all groups in a project.
all | boolean Default: false Whether to get all groups, only available with the groups:list acl. |
const groups = await client.groups.list({ all: true });
{
  "items": [
    {
      "name": "Production Engineers",
      "sourceId": "b7c9a5a4-99c2-4785-bed3-5e6ad9a78603",
      "capabilities": [
        {
          "analyticsAcl": {
            "actions": ["READ"],
            "scope": { "all": {} }
          }
        }
      ],
      "metadata": {
        "property1": "string",
        "property2": "string"
      },
      "id": 0,
      "isDeleted": false,
      "deletedTime": 0
    }
  ]
}
Manage security categories for a specific project. Security categories can be used to restrict access to a resource. Applying a security category to a resource means that only principals (users or service accounts) that also have this security category can access the resource. To learn more about security categories please read this page.
Creates security categories with the given names. Duplicate names in the request are ignored. If a security category with one of the provided names exists already, then the request will fail and no security categories are created.
List of categories to create
required | Array of objects (SecurityCategorySpecDTO) non-empty |
{
  "items": [
    { "name": "Guarded by vendor x" }
  ]
}
{
  "items": [
    { "name": "Guarded by vendor x", "id": 0 }
  ]
}
Deletes the security categories that match the provided IDs. If any of the provided IDs does not belong to an existing security category, then the request will fail and no security categories are deleted.
List of security category IDs to delete.
items required | Array of integers <int64> non-empty unique [ items <int64 > ] |
{
  "items": [23872937137, 1238712837, 128371973]
}
{ }
Retrieves a list of all security categories for a project.
sort | string Default: "ASC" Enum: "ASC" "DESC" Sort descending or ascending. |
cursor | string Cursor to use for paging through results. |
limit | integer <int32> <= 1000 Default: 25 Return up to this many results. Maximum is 1000. Default is 25. |
const securityCategories = await client.securityCategories.list({ sort: 'ASC' });
{
  "items": [
    { "name": "Guarded by vendor x", "id": 0 }
  ],
  "nextCursor": "string"
}
Sessions are used to maintain access to CDF resources for an extended period of time. The methods available to extend a session's lifetime are client credentials and token exchange. Sessions depend on the project's OIDC configuration and may become invalid in the following cases:
The project OIDC configuration has been updated through the update project endpoint. This action invalidates all of the project's sessions.
The session was invalidated through the identity provider.
Create sessions
A request containing the information needed to create a session.
Array of CreateSessionWithClientCredentialsRequest (object) or CreateSessionWithTokenExchangeRequest (object) (CreateSessionRequest) = 1 items |
{
  "items": [
    {
      "clientId": "string",
      "clientSecret": "string"
    }
  ]
}
{
  "items": [
    {
      "id": 0,
      "type": "CLIENT_CREDENTIALS",
      "status": "READY",
      "nonce": "string",
      "clientId": "string"
    }
  ]
}
List all sessions in the current project.
status | string Enum: "ready" "active" "cancelled" "revoked" "access_lost" If given, only sessions with the given status are returned. |
cursor | string Cursor to use for paging through results. |
limit | integer <int32> <= 1000 Default: 25 Return up to this many results. Maximum is 1000. Default is 25. |
{
  "items": [
    {
      "id": 0,
      "type": "CLIENT_CREDENTIALS",
      "status": "READY",
      "creationTime": 0,
      "expirationTime": 0,
      "clientId": "string"
    }
  ],
  "nextCursor": "string",
  "previousCursor": "string"
}
Retrieves sessions with given IDs. The request will fail if any of the IDs does not belong to an existing session.
List of session IDs to retrieve
required | Array of objects [ 1 .. 1000 ] items |
{
  "items": [
    { "id": 1 }
  ]
}
{
  "items": [
    {
      "id": 105049194919491,
      "type": "TOKEN_EXCHANGE",
      "status": "ACTIVE",
      "creationTime": 1638795559528,
      "expirationTime": 1638795559628
    }
  ]
}
Revoke access to a session. Revocation of a session may in some cases take up to 1 hour to take effect.
A request containing the information needed to revoke sessions.
Array of objects (RevokeSessionRequest) |
{
  "items": [
    { "id": 0 }
  ]
}
{
  "items": [
    {
      "id": 0,
      "type": "CLIENT_CREDENTIALS",
      "status": "REVOKED",
      "creationTime": 1638795554528,
      "expirationTime": 1638795554528,
      "clientId": "client-123"
    }
  ]
}
Inspect CDF access granted to an IdP-issued token.
{
  "subject": "string",
  "projects": [
    {
      "projectUrlName": "string",
      "groups": [0]
    }
  ],
  "capabilities": [
    {
      "groupsAcl": {
        "actions": ["LIST"],
        "scope": { "all": {} }
      },
      "projectScope": { "allProjects": {} }
    }
  ]
}
User profiles is an authoritative source of core user profile information (email, name, job title, etc.) for principals, based on data from the identity provider configured for the CDF project.
User profiles are first created (usually within a few seconds) when a principal issues a request against a CDF API. We currently don't support automatic exchange of user identity information between the identity provider and CDF, but the profile data is updated regularly with the latest data from the identity provider for the principals issuing requests against a CDF API.
Note that the user profile data is mutable, and any updates in the external identity provider may also cause updates in this API. Therefore, you cannot use profile data, for example a user's email, to uniquely identify a principal. The exception is the userIdentifier property, which is guaranteed to be immutable.
Retrieves the user profile of the principal issuing the request. If a principal doesn't have a user profile, you get a not found (404) response code.
{
  "userIdentifier": "abcd",
  "givenName": "Jane",
  "surname": "Doe",
  "email": "jane.doe@example.com",
  "displayName": "Jane Doe",
  "jobTitle": "Software Engineer",
  "lastUpdatedTime": 0
}
List all user profiles in the current project. This operation supports pagination by cursor. The results are ordered alphabetically by name.
limit | integer [ 1 .. 1000 ] Default: 25 Limits the number of results to be returned. The server returns no more than 1000 results even if the specified limit is larger. The default limit is 25. |
cursor | string Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo Cursor for paging through results. |
{
  "items": [
    {
      "userIdentifier": "abcd",
      "givenName": "Jane",
      "surname": "Doe",
      "email": "jane.doe@example.com",
      "displayName": "Jane Doe",
      "jobTitle": "Software Engineer",
      "lastUpdatedTime": 0
    }
  ],
  "nextCursor": "string"
}
Retrieve one or more user profiles indexed by the user identifier in the same CDF project.
Specify a maximum of 1000 unique IDs.
Array of objects (UserIdentifier) |
{
  "items": [
    { "userIdentifier": "abcd" }
  ]
}
{
  "items": [
    {
      "userIdentifier": "abcd",
      "givenName": "Jane",
      "surname": "Doe",
      "email": "jane.doe@example.com",
      "displayName": "Jane Doe",
      "jobTitle": "Software Engineer",
      "lastUpdatedTime": 0
    }
  ]
}
Search user profiles in the current project. The result set ordering and match criteria threshold may change over time. This operation does not support pagination.
Query for user profile search.
object | |
limit | integer <int32> [ 1 .. 1000 ] Default: 25 Limits the maximum number of results returned by a single request. The default is 25. |
{
  "search": { "name": "string" },
  "limit": 25
}
{
  "items": [
    {
      "userIdentifier": "abcd",
      "givenName": "Jane",
      "surname": "Doe",
      "email": "jane.doe@example.com",
      "displayName": "Jane Doe",
      "jobTitle": "Software Engineer",
      "lastUpdatedTime": 0
    }
  ]
}
The assets resource type stores digital representations of objects or groups of objects from the physical world. Assets are organized in hierarchies. For example, a water pump asset can be a part of a subsystem asset on an oil platform asset.
Rate and concurrency limits apply to some of the endpoints. If a request exceeds one of the limits, it will be throttled with a 429: Too Many Requests response. More on limit types and how to avoid being throttled is described here.
The following limits apply to the List assets, Filter assets, Aggregate assets and Search assets endpoints. These limits apply to all of the endpoints simultaneously; requests made to different endpoints are counted together. Please note the additional conditions that apply to the Aggregate assets endpoint, as this endpoint provides the most resource-consuming operations.
Limit | Per project | Per user (identity)
---|---|---
Rate | 30 rps total, of which no more than 15 rps to Aggregate | 20 rps, of which no more than 10 rps to Aggregate
Concurrency | 15 parallel requests, of which no more than 6 to Aggregate | 10 parallel requests, of which no more than 4 to Aggregate
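A client-side token bucket is one way to stay under the per-identity rate limits above; this sketch is an assumption about client design, not a feature of the API:

```typescript
// Token bucket: refills at ratePerSec, holds at most `capacity` tokens.
// Call tryAcquire before each request; back off when it returns false.
class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(private ratePerSec: number, private capacity: number, now = 0) {
    this.tokens = capacity;
    this.last = now;
  }
  // Returns true if a request may be sent at time `nowMs` (milliseconds).
  tryAcquire(nowMs: number): boolean {
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((nowMs - this.last) / 1000) * this.ratePerSec
    );
    this.last = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

For example, a bucket constructed with ratePerSec = 20 keeps a single identity under the 20 rps figure in the table; a separate, smaller bucket could guard calls to the Aggregate endpoint.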
The aggregation API lets you compute aggregated results on assets, such as getting the count of all assets in a project, checking the different names and descriptions of assets in your project, etc.
Filters behave the same way as for the Filter assets endpoint. In text properties, the values are aggregated in a case-insensitive manner.
aggregateFilter works similarly to advancedFilter but always applies to aggregate properties. For instance, in the case of an aggregation on the source property, only the values (aka buckets) of the source property can be filtered out.
This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval usage. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Assets resource description for more information.
aggregate | string Value: "count" Type of aggregation to apply.
(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) A filter DSL (Domain Specific Language) to define advanced filter queries. See more information about filtering DSL here. Supported properties:
Note: Filtering on the | |||||||||||||||||||||||||||||
object (Filter) Filter on assets with strict matching. |
{
  "aggregate": "count",
  "advancedFilter": {
    "or": [
      {
        "not": {
          "and": [
            {
              "equals": {
                "property": ["metadata", "asset_type"],
                "value": "gas pump"
              }
            },
            {
              "in": {
                "property": ["source"],
                "values": ["blueprint", "inventory"]
              }
            },
            {
              "range": {
                "property": ["dataSetId"],
                "gte": 1,
                "lt": 10
              }
            }
          ]
        }
      },
      {
        "and": [
          {
            "containsAny": {
              "property": ["labels"],
              "values": ["pump", "cooler"]
            }
          },
          {
            "equals": {
              "property": ["parentId"],
              "value": 95867294876
            }
          }
        ]
      },
      {
        "search": {
          "property": ["description"],
          "value": "My favorite pump"
        }
      }
    ]
  },
  "filter": {
    "name": "string",
    "parentIds": [1],
    "parentExternalIds": ["my.known.id"],
    "rootIds": [{ "id": 1 }],
    "assetSubtreeIds": [{ "id": 1 }],
    "dataSetIds": [{ "id": 1 }],
    "metadata": {
      "property1": "string",
      "property2": "string"
    },
    "source": "string",
    "createdTime": { "max": 0, "min": 0 },
    "lastUpdatedTime": { "max": 0, "min": 0 },
    "root": true,
    "externalIdPrefix": "my.known.prefix",
    "labels": {
      "containsAny": [
        { "externalId": "my.known.id" }
      ]
    },
    "geoLocation": {
      "relation": "INTERSECTS",
      "shape": {
        "type": "Point",
        "coordinates": [0, 0]
      }
    }
  }
}
{
  "items": [
    { "count": 10 }
  ]
}
You can create a maximum of 1000 assets per request.
List of the assets to create. You can create a maximum of 1000 assets per request.
required | Array of objects (DataExternalAssetItem) [ 1 .. 1000 ] items |
{
  "items": [
    {
      "externalId": "my.known.id",
      "name": "string",
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "description": "string",
      "dataSetId": 1,
      "metadata": {
        "property1": "string",
        "property2": "string"
      },
      "source": "string",
      "labels": [
        { "externalId": "my.known.id" }
      ],
      "geoLocation": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": [0, 0]
        },
        "properties": {}
      }
    }
  ]
}
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {
        "childCount": 0,
        "depth": 0,
        "path": [
          { "id": 1 }
        ]
      },
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {
        "property1": "string",
        "property2": "string"
      },
      "source": "string",
      "labels": [
        { "externalId": "my.known.id" }
      ],
      "geoLocation": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": [0, 0]
        },
        "properties": {}
      },
      "id": 1
    }
  ]
}
Delete assets. By default, recursive=false
, and the request fails if you attempt to delete assets that other assets reference as their parent. To delete such assets together with all their descendants, set recursive to true. The request limit does not include the number of descendants that are deleted.
required | Array of AssetInternalId (object) or AssetExternalId (object) (AssetIdEither) [ 1 .. 1000 ] items |
recursive | boolean Default: false Recursively delete all asset subtrees under the specified IDs. |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{
  "items": [
    {"id": 1}
  ],
  "recursive": false,
  "ignoreUnknownIds": false
}
{ }
Retrieve a list of assets in the same project. This operation supports pagination by cursor. Apply Filtering and Advanced filtering criteria to select a subset of assets.
Advanced filter lets you create complex filtering expressions that combine simple operations, such as equals, prefix, and exists, using the boolean operators and, or, and not. It applies to basic fields as well as metadata. See the advancedFilter attribute in the example. See more information about the filtering DSL here.
Leaf filter | Supported fields | Description
---|---|---
containsAll | Array type fields | Only includes results which contain all of the specified values. {"containsAll": {"property": ["property"], "values": [1, 2, 3]}}
containsAny | Array type fields | Only includes results which contain at least one of the specified values. {"containsAny": {"property": ["property"], "values": [1, 2, 3]}}
equals | Non-array type fields | Only includes results that are equal to the specified value. {"equals": {"property": ["property"], "value": "example"}}
exists | All fields | Only includes results where the specified property exists (has a value). {"exists": {"property": ["property"]}}
in | Non-array type fields | Only includes results that are equal to one of the specified values. {"in": {"property": ["property"], "values": [1, 2, 3]}}
prefix | String type fields | Only includes results which start with the specified value. {"prefix": {"property": ["property"], "value": "example"}}
range | Non-array type fields | Only includes results that fall within the specified range. {"range": {"property": ["property"], "gt": 1, "lte": 5}} Supported operators: gt, lt, gte, lte
search | ["name"], ["description"] | Introduced to provide functional parity with the /assets/search endpoint. {"search": {"property": ["property"], "value": "example"}}
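To make the leaf-filter semantics concrete, here is a minimal local sketch that evaluates the filter shapes from the table above against a plain object. The helper names (evalLeaf, getProp) are hypothetical and not part of the Cognite API or SDKs; only the filter shapes come from the table.

```javascript
// Sketch of leaf-filter semantics; evalLeaf and getProp are hypothetical
// helpers, not part of the Cognite API or SDKs.
function getProp(obj, path) {
  // Resolve a property path such as ["metadata", "asset_type"].
  return path.reduce((o, key) => (o == null ? undefined : o[key]), obj);
}

function evalLeaf(filter, item) {
  const [kind] = Object.keys(filter);
  const body = filter[kind];
  const v = getProp(item, body.property);
  switch (kind) {
    case "equals":
      return v === body.value;
    case "in":
      return body.values.includes(v);
    case "prefix":
      return typeof v === "string" && v.startsWith(body.value);
    case "exists":
      return v !== undefined && v !== null;
    case "containsAll":
      return Array.isArray(v) && body.values.every((x) => v.includes(x));
    case "containsAny":
      return Array.isArray(v) && body.values.some((x) => v.includes(x));
    case "range": {
      const { gt, gte, lt, lte } = body;
      return (gt === undefined || v > gt) && (gte === undefined || v >= gte) &&
             (lt === undefined || v < lt) && (lte === undefined || v <= lte);
    }
    default:
      throw new Error(`unknown leaf filter: ${kind}`);
  }
}
```

For example, evalLeaf({prefix: {property: ["name"], value: "pump"}}, {name: "pump valve"}) returns true, while the corresponding equals filter does not match.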
The search leaf filter provides functional parity with the /assets/search endpoint. It's available only for the ["name"] and ["description"] properties. When you specify only this filter with no explicit ordering, the behavior is the same as that of the /assets/search endpoint without filters. Explicit sorting overrides the default ordering by relevance. You can use the search leaf filter like any other leaf filter to create complex queries. See the search filter in the advancedFilter attribute in the example.
The advanced filter has the following limitations:

- and and or clauses must have at least one element.
- The property array of each leaf filter has the following limitations:
  - containsAll, containsAny, and in filter values array size must be in the range [1, 100].
  - containsAll, containsAny, and in filter values array must contain elements of a primitive type (number, string).
  - range filter must have at least one of the gt, gte, lt, lte attributes. gt is mutually exclusive with gte, while lt is mutually exclusive with lte. At least one of the bounds must be specified.
  - gt, gte, lt, lte in the range filter must be a primitive value.
  - search filter value must not be blank, and its length must be in the range [1, 128].
- Maximum filter value length by property:
  - externalId - 255
  - name - 128 for the search filter and 255 for other filters
  - description - 128 for the search filter and 255 for other filters
  - labels item - 255
  - source - 128
  - metadata key - 128

By default, assets are sorted by id in ascending order.
Use the search leaf filter to sort the results by relevance. Sorting by other fields can be explicitly requested. The order field is optional and defaults to desc for _score_ and asc for all other fields. The nulls field is optional and defaults to auto, which the service translates to last for the asc order and to first for the desc order. Partitions are done independently of sorting; there's no guarantee of the sort order between elements from different partitions. See the sort attribute in the example.
If the nulls attribute has the auto value, or the attribute isn't specified, null (missing) values are considered to be bigger than any other values. They are placed last when sorting in the asc order and first when sorting in the desc order. Otherwise, missing values are placed according to the nulls attribute (last or first), and their placement doesn't depend on the order value. Values such as empty strings aren't considered nulls.
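The null-placement rules above can be pictured as a plain comparator. This is a hypothetical illustration of the documented semantics, not SDK code: with nulls set to auto, missing values sort as larger than any present value, so they land last for asc and first for desc; an explicit last or first pins them regardless of order.

```javascript
// Sketch of the documented sort semantics; compareWithNulls is a
// hypothetical helper, not part of the Cognite API or SDKs.
function compareWithNulls(a, b, order = "asc", nulls = "auto") {
  const aNull = a === undefined || a === null;
  const bNull = b === undefined || b === null;
  if (aNull || bNull) {
    if (aNull && bNull) return 0;
    if (nulls === "auto") {
      // auto: nulls are "bigger" than any value, before order is applied,
      // so they come last for asc and first for desc.
      const cmp = aNull ? 1 : -1;
      return order === "asc" ? cmp : -cmp;
    }
    // Explicit "last"/"first" placement ignores the order value.
    const cmp = aNull ? 1 : -1;
    return nulls === "last" ? cmp : -cmp;
  }
  const cmp = a < b ? -1 : a > b ? 1 : 0;
  return order === "asc" ? cmp : -cmp;
}
```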
Use the special sort property _score_ when sorting by relevance. The more filters a particular asset matches, the higher its score is. This can be useful, for example, when building UIs. Let's assume we want exact matches to be displayed above matches by prefix, as in the request below. An asset named pump will match both the equals and prefix filters and will therefore have a higher score than assets with names like pump valve that match only the prefix filter.
"advancedFilter" : {
"or" : [
{
"equals": {
"property": ["name"],
"value": "pump"
}
},
{
"prefix": {
"property": ["name"],
"value": "pump"
}
}
]
},
"sort": [
{
"property" : ["_score_"]
}
]
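A rough way to picture this scoring is to count how many of the or-branches match. The snippet below is only an illustration of that idea (the service's actual relevance scoring is more sophisticated), and matchCount is a hypothetical local helper, not an API function.

```javascript
// Illustration only: approximates "more matching filters => higher score".
// The actual _score_ computed by the service is more sophisticated.
function matchCount(name, filters) {
  let score = 0;
  for (const f of filters) {
    if (f.equals && name === f.equals.value) score++;
    if (f.prefix && name.startsWith(f.prefix.value)) score++;
  }
  return score;
}

const filters = [
  { equals: { property: ["name"], value: "pump" } },
  { prefix: { property: ["name"], value: "pump" } },
];
// "pump" matches both branches; "pump valve" matches only the prefix branch,
// so an exact match sorts above a prefix-only match when ordering by score.
```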
This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling scheme (limited request rate and concurrency). See the Assets resource description for more information.
object (Filter) Filter on assets with strict matching. | |||||||||||||||||||||||||||||
(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) A filter DSL (Domain Specific Language) to define advanced filter queries. See more information about filtering DSL here. Supported properties:
Note: Filtering on the | |||||||||||||||||||||||||||||
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Limits the number of results to return. | ||||||||||||||||||||||||||||
Array of objects (AssetSortProperty) [ 1 .. 2 ] items Sort by array of selected properties. | |||||||||||||||||||||||||||||
cursor | string | ||||||||||||||||||||||||||||
aggregatedProperties | Array of strings (AggregatedProperty) Items Enum: "childCount" "path" "depth" Set of aggregated properties to include | ||||||||||||||||||||||||||||
partition | string (Partition) Splits the data set into To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions. When using more than 10 partitions, CDF may reduce the number of partitions silently.
For example, CDF may reduce the number of partitions to In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status. |
{
  "filter": {
    "name": "string",
    "parentIds": [1],
    "parentExternalIds": ["my.known.id"],
    "rootIds": [{"id": 1}],
    "assetSubtreeIds": [{"id": 1}],
    "dataSetIds": [{"id": 1}],
    "metadata": {"property1": "string", "property2": "string"},
    "source": "string",
    "createdTime": {"max": 0, "min": 0},
    "lastUpdatedTime": {"max": 0, "min": 0},
    "root": true,
    "externalIdPrefix": "my.known.prefix",
    "labels": {"containsAny": [{"externalId": "my.known.id"}]},
    "geoLocation": {
      "relation": "INTERSECTS",
      "shape": {"type": "Point", "coordinates": [0, 0]}
    }
  },
  "advancedFilter": {
    "or": [
      {
        "not": {
          "and": [
            {"equals": {"property": ["metadata", "asset_type"], "value": "gas pump"}},
            {"in": {"property": ["source"], "values": ["blueprint", "inventory"]}},
            {"range": {"property": ["dataSetId"], "gte": 1, "lt": 10}}
          ]
        }
      },
      {
        "and": [
          {"containsAny": {"property": ["labels"], "values": ["pump", "cooler"]}},
          {"equals": {"property": ["parentId"], "value": 95867294876}}
        ]
      },
      {"search": {"property": ["description"], "value": "My favorite pump"}}
    ]
  },
  "limit": 100,
  "sort": [
    {"property": ["createdTime"], "order": "desc"},
    {"property": ["metadata", "customMetadataKey"], "nulls": "first"}
  ],
  "cursor": "4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo",
  "aggregatedProperties": ["childCount"],
  "partition": "1/10"
}
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {"property1": "string", "property2": "string"},
      "source": "string",
      "labels": [{"externalId": "my.known.id"}],
      "geoLocation": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0, 0]},
        "properties": {}
      },
      "id": 1
    }
  ],
  "nextCursor": "string"
}
List all assets, or only the assets matching the specified query.
This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling scheme (limited request rate and concurrency). See the Assets resource description for more information.
limit | integer [ 1 .. 1000 ] Default: 100 Limits the number of results to be returned. The maximum results returned by the server is 1000 even if you specify a higher limit. |
cursor | string Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo Cursor for paging through results. |
includeMetadata | boolean Default: true Whether the metadata field should be returned or not. |
name | string (AssetName) [ 1 .. 140 ] characters The name of the asset. |
parentIds | string <jsonArray(int64)> (JsonArrayInt64) Example: parentIds=[363848954441724, 793045462540095, 1261042166839739] List only assets that have one of the parentIds as a parent. The parentId for root assets is null. |
parentExternalIds | string <jsonArray(string)> (JsonArrayString) Example: parentExternalIds=[externalId_1, externalId_2, externalId_3] List only assets that have one of the parentExternalIds as a parent. The parentId for root assets is null. |
rootIds | string <jsonArray(int64)> (JsonArrayInt64) Deprecated Example: rootIds=[363848954441724, 793045462540095, 1261042166839739] This parameter is deprecated. Use assetSubtreeIds instead. List only assets that have one of the rootIds as a root asset. A root asset is its own root asset. |
assetSubtreeIds | string <jsonArray(int64)> (JsonArrayInt64) Example: assetSubtreeIds=[363848954441724, 793045462540095, 1261042166839739] List only assets that are in a subtree rooted at any of these assetIds (including the roots given). If the total size of the given subtrees exceeds 100,000 assets, an error will be returned. |
assetSubtreeExternalIds | string <jsonArray(string)> (JsonArrayString) Example: assetSubtreeExternalIds=[externalId_1, externalId_2, externalId_3] List only assets that are in a subtree rooted at any of these assetExternalIds. If the total size of the given subtrees exceeds 100,000 assets, an error will be returned. |
source | string <= 128 characters The source of the asset, for example which database it's from. |
root | boolean Default: false Whether the filtered assets are root assets, or not. Set to True to only list root assets. |
minCreatedTime | integer <int64> (EpochTimestamp) >= 0 The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds. |
maxCreatedTime | integer <int64> (EpochTimestamp) >= 0 The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds. |
minLastUpdatedTime | integer <int64> (EpochTimestamp) >= 0 The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds. |
maxLastUpdatedTime | integer <int64> (EpochTimestamp) >= 0 The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds. |
externalIdPrefix | string (CogniteExternalIdPrefix) <= 255 characters Example: externalIdPrefix=my.known.prefix Filter by this (case-sensitive) prefix for the external ID. |
partition | string Example: partition=1/10 Splits the data set into To prevent unexpected problems and maximize read throughput, you should at most use 10 (N <= 10) partitions. When using more than 10 partitions, CDF may reduce the number of partitions silently.
For example, CDF may reduce the number of partitions to In future releases of the resource APIs, Cognite may reject requests if you specify more than 10 partitions. When Cognite enforces this behavior, the requests will result in a 400 Bad Request status. |
const assets = await client.assets.list({ filter: { name: '21PT1019' } });
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {"property1": "string", "property2": "string"},
      "source": "string",
      "labels": [{"externalId": "my.known.id"}],
      "geoLocation": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0, 0]},
        "properties": {}
      },
      "id": 1
    }
  ],
  "nextCursor": "string"
}
Retrieve an asset by its ID. If you want to retrieve assets by externalIds, use Retrieve assets instead.
id required | integer <int64> (CogniteInternalId) [ 1 .. 9007199254740991 ] A server-generated ID for the object. |
const assets = await client.assets.retrieve([{id: 123}, {externalId: 'abc'}]);
{
  "createdTime": 0,
  "lastUpdatedTime": 0,
  "rootId": 1,
  "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
  "parentId": 1,
  "parentExternalId": "my.known.id",
  "externalId": "my.known.id",
  "name": "string",
  "description": "string",
  "dataSetId": 1,
  "metadata": {"property1": "string", "property2": "string"},
  "source": "string",
  "labels": [{"externalId": "my.known.id"}],
  "geoLocation": {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [0, 0]},
    "properties": {}
  },
  "id": 1
}
Retrieve assets by IDs or external IDs. If you specify to get aggregates, then be aware that the aggregates are eventually consistent.
All provided IDs and external IDs must be unique.
required | Array of AssetInternalId (object) or AssetExternalId (object) (AssetIdEither) [ 1 .. 1000 ] items |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
aggregatedProperties | Array of strings (AggregatedProperty) Items Enum: "childCount" "path" "depth" Set of aggregated properties to include |
{
  "items": [
    {"id": 1}
  ],
  "ignoreUnknownIds": false,
  "aggregatedProperties": ["childCount"]
}
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {"property1": "string", "property2": "string"},
      "source": "string",
      "labels": [{"externalId": "my.known.id"}],
      "geoLocation": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0, 0]},
        "properties": {}
      },
      "id": 1
    }
  ]
}
Fulltext search for assets based on result relevance. Primarily meant for human-centric use-cases, not for programs, since matching and ordering may change over time. Additional filters can also be specified. This operation doesn't support pagination.
This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling scheme (limited request rate and concurrency). See the Assets resource description for more information.
Search query
object (Filter) Filter on assets with strict matching. | |
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Limits the number of results to return. |
object (Search) Fulltext search for assets. Primarily meant for human-centric use-cases, not for programs. The query parameter uses a different search algorithm than the deprecated name and description parameters, and will generally give much better results. |
{
  "filter": {
    "parentIds": [1293812938, 293823982938]
  },
  "search": {
    "name": "flow",
    "description": "upstream"
  }
}
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {"property1": "string", "property2": "string"},
      "source": "string",
      "labels": [{"externalId": "my.known.id"}],
      "geoLocation": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0, 0]},
        "properties": {}
      },
      "id": 1
    }
  ]
}
Update the attributes of assets.
All provided IDs and external IDs must be unique. Fields that aren't included in the request aren't changed.
required | Array of AssetChangeById (object) or AssetChangeByExternalId (object) (AssetChange) [ 1 .. 1000 ] items |
{
  "items": [
    {
      "update": {
        "externalId": {"set": "my.known.id"},
        "name": {"set": "string"},
        "description": {"set": "string"},
        "dataSetId": {"set": 0},
        "metadata": {"set": {"key1": "value1", "key2": "value2"}},
        "source": {"set": "string"},
        "parentId": {"set": 1},
        "parentExternalId": {"set": "my.known.id"},
        "labels": {
          "add": [{"externalId": "my.known.id"}],
          "remove": [{"externalId": "my.known.id"}]
        },
        "geoLocation": {
          "set": {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [0, 0]},
            "properties": {}
          }
        }
      },
      "id": 1
    }
  ]
}
{
  "items": [
    {
      "createdTime": 0,
      "lastUpdatedTime": 0,
      "rootId": 1,
      "aggregates": {"childCount": 0, "depth": 0, "path": [{"id": 1}]},
      "parentId": 1,
      "parentExternalId": "my.known.id",
      "externalId": "my.known.id",
      "name": "string",
      "description": "string",
      "dataSetId": 1,
      "metadata": {"property1": "string", "property2": "string"},
      "source": "string",
      "labels": [{"externalId": "my.known.id"}],
      "geoLocation": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0, 0]},
        "properties": {}
      },
      "id": 1
    }
  ]
}
A time series consists of a sequence of data points connected to a single asset. For example, a water pump asset can have a temperature time series that records a data point in units of °C every second.
A single asset can have several time series. The water pump could have additional time series measuring pressure within the pump, rpm, flow volume, power consumption, and more. Time series store data points as either numbers or strings. This is controlled by the is_string flag on the time series object. Numerical data points can be aggregated before they are returned from a query (e.g., to find the average temperature for a day). String data points, on the other hand, can't be aggregated by CDF but can store arbitrary information like states (e.g., "open"/"closed") or more complex information (JSON).
Cognite stores discrete data points, but the underlying process measured by the data points can vary continuously. When interpolating between data points, we can either assume that each value stays the same until the next measurement, or that it changes linearly between the two measurements. The isStep flag on the time series object controls this. For example, if we estimate the average over an interval containing two data points, the average will either be close to the first value (isStep) or close to the mean of the two (not isStep).
A data point stores a single piece of information, a number or a string, associated with a specific time. Data points are identified by their timestamps, measured in milliseconds since the unix epoch -- 00:00:00.000, January 1st, 1970. The time series service accepts timestamps in the range from 00:00:00.000, January 1st, 1900 through 23:59:59.999, December 31st, 2099 (in other words, every millisecond in the two centuries from 1900 to but not including 2100). Negative timestamps are used to define dates before 1970. Milliseconds is the finest time resolution supported by CDF, i.e., fractional milliseconds are not supported. Leap seconds are not counted.
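The accepted timestamp range above can be sketched with Date.UTC. isValidCdfTimestamp is a hypothetical helper for illustration, not an SDK function:

```javascript
// Hypothetical helper illustrating the documented timestamp range:
// every millisecond from 1900-01-01 up to (but not including) 2100-01-01,
// expressed as milliseconds since the Unix epoch (negative before 1970).
const MIN_TS = Date.UTC(1900, 0, 1); // negative: before the Unix epoch
const MAX_TS = Date.UTC(2100, 0, 1); // exclusive upper bound

function isValidCdfTimestamp(ms) {
  // Fractional milliseconds are not supported, so the value must be an integer.
  return Number.isInteger(ms) && ms >= MIN_TS && ms < MAX_TS;
}
```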
Numerical data points can be aggregated before they are retrieved from CDF. This allows for faster queries by reducing the amount of data transferred. You can aggregate data points by specifying one or more aggregates (e.g., average, minimum, maximum) as well as the time granularity over which the aggregates should be applied (e.g., “1h” for one hour).
Aggregates are aligned to the start time modulo the granularity unit. For example, if you ask for daily average temperatures since Monday afternoon last week, the first aggregated data point will contain averages for Monday, the second for Tuesday, etc. Determining aggregate alignment without considering data point timestamps allows CDF to pre-calculate aggregates (e.g., to quickly return daily average temperatures for a year). Consequently, aggregating over 60 minutes can return a different result than aggregating over 1 hour, because the two queries are aligned differently. Asset references obtained from a time series - through its asset ID - may be invalid due to the non-transactional nature of HTTP; they are maintained in an eventually consistent manner.
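The alignment rule can be sketched as follows: the first aggregate window starts at the query start rounded down to a whole multiple of the granularity's base unit. This is an interpretation of the paragraph above, not SDK code, and alignStart is a hypothetical name:

```javascript
// Sketch: aggregate windows are aligned to the start time modulo the
// granularity unit (minute for "60m", hour for "1h", day for "1d").
// This is an interpretation of the documented behavior, not SDK code.
const UNIT_MS = { m: 60_000, h: 3_600_000, d: 86_400_000 };

function alignStart(startMs, granularity) {
  const unit = granularity.slice(-1); // e.g. "60m" -> "m", "1h" -> "h"
  const unitMs = UNIT_MS[unit];
  // Round down to a whole multiple of the unit (for non-negative startMs).
  return startMs - (startMs % unitMs);
}

// 10:30:45 UTC expressed as milliseconds since midnight:
const t = (10 * 3600 + 30 * 60 + 45) * 1000;
// "60m" aligns to the minute boundary (10:30:00) while "1h" aligns to the
// hour boundary (10:00:00) - which is why the two can give different results.
```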
The aggregation API allows you to compute aggregated results from a set of time series, such as getting the number of time series in a project or checking which assets the different time series in your project are associated with (along with the number of time series for each asset). By specifying filter and/or advancedFilter, the aggregation will take place only over those time series that match the filters. filter and advancedFilter behave the same way as in the list endpoint.

The default behavior, when the aggregate field is not specified in the request body, is to return the number of time series that match the filters (if any), which is the same behavior as when the aggregate field is set to count. The following requests will both return the total number of time series whose name begins with pump:
{
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
and
{
"aggregate": "count",
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
The response might be:
{"items": [{"count": 42}]}
Setting aggregate to uniqueValues and specifying a property in properties (this field is an array, but currently only supports one property) will return all unique values (up to a maximum of 1000) that are taken on by that property across all the time series that match the filters, as well as the number of time series that have each of those property values. This example request finds all the unique asset IDs that are referenced by the time series in your project whose name begins with pump:
{
"aggregate": "uniqueValues",
"properties": [{"property": ["assetId"]}],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
The response might be the following, saying that 23 time series are associated with asset 18 and 107 time series are associated with asset 76:
{
"items": [
{"values": ["18"], "count": 23},
{"values": ["76"], "count": 107}
]
}
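The uniqueValues behavior can be pictured as a group-by-and-count over the matching time series. The sketch below is a hypothetical local simulation of that semantics, not an API call:

```javascript
// Sketch of uniqueValues semantics: apply the filter, then group and count.
// uniqueValues here is a hypothetical local helper, not an SDK function.
function uniqueValues(series, property, namePrefix) {
  const counts = new Map();
  for (const ts of series) {
    if (!ts.name.startsWith(namePrefix)) continue; // the advancedFilter step
    const value = String(ts[property]);
    counts.set(value, (counts.get(value) || 0) + 1);
  }
  // Shape the result like the documented response items.
  return [...counts].map(([v, count]) => ({ values: [v], count }));
}

const series = [
  { name: "pump-1-temperature", assetId: 18 },
  { name: "pump-2-temperature", assetId: 18 },
  { name: "pump-3-pressure", assetId: 76 },
  { name: "motor-9-temperature", assetId: 76 }, // excluded: no "pump" prefix
];
```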
Setting aggregate to cardinalityValues will instead return the approximate number of distinct values that are taken on by the given property among the matching time series. Example request:
{
"aggregate": "cardinalityValues",
"properties": [{"property": ["assetId"]}],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
The result is likely exact when the set of unique values is small. In this example, there are likely two distinct asset ids among the matching time series:
{"items": [{"count": 2}]}
Setting aggregate to uniqueProperties will return the set of unique properties whose property path begins with path (which can currently only be ["metadata"]) that are contained in the time series that match the filters. Example request:
{
"aggregate": "uniqueProperties",
"path": ["metadata"],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
The result contains all the unique metadata keys in the time series whose name begins with pump, and the number of time series that contain each metadata key:
{
"items": [
{"values": [{"property": ["metadata", "tag"]}], "count": 43},
{"values": [{"property": ["metadata", "installationDate"]}], "count": 97}
]
}
Setting aggregate to cardinalityProperties will instead return the approximate number of different property keys whose path begins with path (which can currently only be ["metadata"], meaning that this can only be used to count the approximate number of distinct metadata keys among the matching time series). Example request:
{
"aggregate": "cardinalityProperties",
"path": ["metadata"],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}}
}
The result is likely exact when the set of unique values is small. In this example, there are likely two distinct metadata keys among the matching time series:
{"items": [{"count": 2}]}
The aggregateFilter field may be specified if aggregate is set to cardinalityProperties or uniqueProperties. The structure of this field is similar to that of advancedFilter, except that the set of leaf filters is smaller (in, prefix, and range), and that none of the leaf filters specify a property. Unlike advancedFilter, which is applied before the aggregation (to restrict the set of time series that the aggregation operation is applied to), aggregateFilter is applied after the initial aggregation has been performed, in order to restrict the set of results.

When aggregate is set to uniqueProperties, the result set contains a number of property paths, each with an associated count that shows how many time series contained that property (among those time series that matched the filter and advancedFilter, if they were specified). If aggregateFilter is specified, it restricts the property paths included in the output. Let us add an aggregateFilter to the uniqueProperties example from above:
{
"aggregate": "uniqueProperties",
"path": ["metadata"],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}},
"aggregateFilter": {"prefix": {"value": "t"}}
}
Now, the result only contains those metadata properties whose key begins with t (but it will be the same set of metadata properties beginning with t as in the original query without aggregateFilter, and the counts will be the same):
{
"items": [
{"values": [{"property": ["metadata", "tag"]}], "count": 43}
]
}
Similarly, adding aggregateFilter to cardinalityProperties will return the approximate number of properties whose property key matches aggregateFilter, from those time series matching the filter and advancedFilter (or from all time series if neither filter nor advancedFilter is specified):
{
"aggregate": "cardinalityProperties",
"path": ["metadata"],
"advancedFilter": {"prefix": {"property": ["name"], "value": "pump"}},
"aggregateFilter": {"prefix": {"value": "t"}}
}
As we saw above, only one property matches:
{"items": [{"count": 1}]}
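The before/after distinction can be sketched in a few lines: advancedFilter prunes the input time series, while aggregateFilter prunes the already-aggregated rows. This is a hypothetical illustration, not SDK code:

```javascript
// Sketch: aggregateFilter is applied AFTER aggregation, to the result rows,
// unlike advancedFilter, which is applied BEFORE, to the time series.
// applyAggregateFilter is a hypothetical helper, not part of the API.
function applyAggregateFilter(rows, prefixValue) {
  // Each row looks like {values: [{property: ["metadata", key]}], count: n}.
  return rows.filter((row) =>
    row.values[0].property[1].startsWith(prefixValue)
  );
}

const rows = [
  { values: [{ property: ["metadata", "tag"] }], count: 43 },
  { values: [{ property: ["metadata", "installationDate"] }], count: 97 },
];
// Keeping only keys that start with "t" leaves the "tag" row; its count is
// unchanged, matching the documented behavior of aggregateFilter.
```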
Note that aggregateFilter is also accepted when aggregate is set to cardinalityValues or cardinalityProperties. For those aggregations, the effect of any aggregateFilter could also be achieved via a similar advancedFilter. However, aggregateFilter is not accepted when aggregate is omitted or set to count.
Rate and concurrency limits apply to this endpoint. If a request exceeds one of the limits, it will be throttled with a 429: Too Many Requests response. More on limit types and how to avoid being throttled is described here.
Limit | Per project | Per user (identity)
---|---|---
Rate | 15 requests per second | 10 requests per second
Concurrency | 6 concurrent requests | 4 concurrent requests
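As recommended above, a 429 response is best handled with truncated exponential backoff. A minimal sketch of such a delay schedule follows; the function name and constants are illustrative, not prescribed by the API:

```javascript
// Truncated exponential backoff with jitter for 429 responses.
// The base delay and cap are illustrative; tune them for your workload.
function backoffDelayMs(attempt, baseMs = 250, capMs = 10_000) {
  // Exponential growth, truncated at capMs so retries never wait too long.
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  // Jitter spreads concurrent clients out so they don't retry in lockstep.
  return exp / 2 + Math.random() * (exp / 2);
}
```

A retry loop would sleep for backoffDelayMs(attempt) after each 429 and give up after a bounded number of attempts.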
Aggregates the time series that match the given criteria.
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (TimeSeriesFilterLanguage) A filter DSL (Domain Specific Language) to define advanced filter queries. At the top level, an | |
object (Filter) | |
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (in (object) or range (object) or prefix (object))) (TimeSeriesAggregateFilter) A filter DSL (Domain Specific Language) to define aggregate filters. | |
aggregate | string Value: "count" The |
{
  "advancedFilter": {
    "or": [
      {
        "not": {
          "and": [
            {"equals": {"property": ["metadata", "manufacturer"], "value": "acme"}},
            {"in": {"property": ["name"], "values": ["pump-1-temperature", "motor-9-temperature"]}},
            {"range": {"property": ["dataSetId"], "gte": 1, "lt": 10}}
          ]
        }
      },
      {
        "and": [
          {"equals": {"property": ["assetId"], "value": 1234}},
          {"equals": {"property": ["description"], "value": "Temperature in Celsius"}}
        ]
      }
    ]
  },
  "filter": {
    "name": "string",
    "unit": "string",
    "isString": true,
    "isStep": true,
    "metadata": {"property1": "string", "property2": "string"},
    "assetIds": [363848954441724, 793045462540095, 1261042166839739],
    "assetExternalIds": ["my.known.id"],
    "rootAssetIds": [343099548723932, 88483999203217],
    "assetSubtreeIds": [{"id": 1}],
    "dataSetIds": [{"id": 1}],
    "externalIdPrefix": "my.known.prefix",
    "createdTime": {"max": 0, "min": 0},
    "lastUpdatedTime": {"max": 0, "min": 0}
  },
  "aggregateFilter": {
    "and": [{}]
  },
  "aggregate": "count"
}
{
  "items": [
    {"count": 0}
  ]
}
Creates one or more time series.
required | Array of objects (PostTimeSeriesMetadataDTO) [ 1 .. 1000 ] items |
{
  "items": [
    {
      "externalId": "string",
      "name": "string",
      "legacyName": "string",
      "isString": false,
      "metadata": {"property1": "string", "property2": "string"},
      "unit": "string",
      "assetId": 1,
      "isStep": false,
      "description": "string",
      "securityCategories": [0],
      "dataSetId": 1
    }
  ]
}
{
  "items": [
    {
      "id": 1,
      "externalId": "string",
      "name": "string",
      "isString": true,
      "metadata": {"property1": "string", "property2": "string"},
      "unit": "string",
      "assetId": 1,
      "isStep": true,
      "description": "string",
      "securityCategories": [0],
      "dataSetId": 1,
      "createdTime": 0,
      "lastUpdatedTime": 0
    }
  ]
}
Delete data points from time series.
The list of delete requests to perform.
required | Array of QueryWithInternalId (object) or QueryWithExternalId (object) (DatapointsDeleteRequest) [ 1 .. 10000 ] items List of delete filters. |
{- "items": [
- {
- "inclusiveBegin": 1638795554528,
- "exclusiveEnd": 1638795554528,
- "id": 1
}
]
}
{ }
Deletes the time series with the specified IDs and their data points.
Specify a list of the time series to delete.
required | Array of QueryWithInternalId (object) or QueryWithExternalId (object) [ 1 .. 1000 ] items unique List of ID objects. |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{- "items": [
- {
- "id": 1
}
], - "ignoreUnknownIds": false
}
{ }
The `advancedFilter` field lets you create complex filtering expressions that combine simple operations, such as `equals`, `prefix`, and `exists`, by using the Boolean operators `and`, `or`, and `not`. Filtering applies to basic fields as well as metadata. See the `advancedFilter` syntax in the request example.
| Leaf filter | Supported fields | Description and example |
|---|---|---|
| `containsAll` | Array type fields | Only includes results which contain all of the specified values. `{"containsAll": {"property": ["property"], "values": [1, 2, 3]}}` |
| `containsAny` | Array type fields | Only includes results which contain at least one of the specified values. `{"containsAny": {"property": ["property"], "values": [1, 2, 3]}}` |
| `equals` | Non-array type fields | Only includes results that are equal to the specified value. `{"equals": {"property": ["property"], "value": "example"}}` |
| `exists` | All fields | Only includes results where the specified property exists (has a value). `{"exists": {"property": ["property"]}}` |
| `in` | Non-array type fields | Only includes results that are equal to one of the specified values. `{"in": {"property": ["property"], "values": [1, 2, 3]}}` |
| `prefix` | String type fields | Only includes results which start with the specified text. `{"prefix": {"property": ["property"], "value": "example"}}` |
| `range` | Non-array type fields | Only includes results that fall within the specified range. `{"range": {"property": ["property"], "gt": 1, "lte": 5}}` Supported operators: `gt`, `lt`, `gte`, `lte` |
| `search` | `["name"]` and `["description"]` | Introduced to provide functional parity with the `/timeseries/search` endpoint. `{"search": {"property": ["property"], "value": "example"}}` |
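The leaf filters and Boolean operators above compose into plain JSON. As a sketch, small builder helpers can make that composition explicit; the helper names (`eq`, `inValues`, and so on) are our own invention, and only the emitted JSON shape comes from the documentation:

```javascript
// Hypothetical builder helpers for the advanced filter DSL.
// Only the produced JSON structure is defined by the API docs.
const eq = (property, value) => ({ equals: { property, value } });
const inValues = (property, values) => ({ in: { property, values } });
const range = (property, bounds) => ({ range: { property, ...bounds } });
const and = (...filters) => ({ and: filters });
const or = (...filters) => ({ or: filters });
const not = (filter) => ({ not: filter });

// Reproduces the request example: everything NOT made by acme with the
// given names and dataSetId range, OR the asset 1234 temperature series.
const advancedFilter = or(
  not(and(
    eq(["metadata", "manufacturer"], "acme"),
    inValues(["name"], ["pump-1-temperature", "motor-9-temperature"]),
    range(["dataSetId"], { gte: 1, lt: 10 })
  )),
  and(
    eq(["assetId"], 1234),
    eq(["description"], "Temperature in Celsius")
  )
);
```

The resulting `advancedFilter` object can be placed directly into the request body shown in the example.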
| Property | Type |
|---|---|
| `["description"]` | string |
| `["externalId"]` | string |
| `["metadata", "<someCustomKey>"]` | string |
| `["name"]` | string |
| `["unit"]` | string |
| `["assetId"]` | number |
| `["assetRootId"]` | number |
| `["createdTime"]` | number |
| `["dataSetId"]` | number |
| `["id"]` | number |
| `["lastUpdatedTime"]` | number |
| `["isStep"]` | Boolean |
| `["isString"]` | Boolean |
| `["accessCategories"]` | array of strings |
| `["securityCategories"]` | array of numbers |
- `and` and `or` clauses must have at least one element (and at most 99, since each element counts towards the total clause limit, and so does the `and`/`or` clause itself).
- The `property` array of each leaf filter must match one of the existing properties (a static top-level property or a dynamic metadata property).
- The `containsAll`, `containsAny`, and `in` filter `values` array size must be in the range [1, 100].
- The `containsAll`, `containsAny`, and `in` filter `values` array must contain elements of number or string type (matching the type of the given property).
- The `range` filter must have at least one of the `gt`, `gte`, `lt`, `lte` attributes. `gt` is mutually exclusive with `gte`, and `lt` is mutually exclusive with `lte`.
- The `gt`, `gte`, `lt`, `lte` values in the `range` filter must be of number or string type (matching the type of the given property).
- The `search` filter `value` must not be blank, its length must be in the range [1, 128], and there may be at most two `search` filters in the entire filter query.
- The maximum length of the `value` of a leaf filter applied to a string property is 256.

By default, time series are sorted by their creation time in ascending order.
Sorting by another property, or by several other properties, can be explicitly requested via the `sort` field, which must contain a list of one or more sort specifications. Each sort specification indicates the `property` to sort on and, optionally, the `order` in which to sort (defaults to `asc`). If multiple sort specifications are supplied, the results are sorted on the first property, and those with the same value for the first property are sorted on the second property, and so on.

Partitioning is done independently of sorting; there is no guarantee of sort order between elements from different partitions.

If the `nulls` field has the `auto` value, or the field isn't specified, null (missing) values are considered bigger than any other values. They are placed last when sorting in the `asc` order and first in the `desc` order. Otherwise, missing values are placed according to the `nulls` field (`last` or `first`), and their placement doesn't depend on the `order` field. Note that the number zero, empty strings, and empty lists are all considered not null.
{
"sort": [
{
"property" : ["createdTime"],
"order": "desc",
"nulls": "last"
},
{
"property" : ["metadata", "<someCustomKey>"]
}
]
}
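The null-placement rules described above can be sketched as a client-side comparator. This is a local illustration of the documented semantics for a single sort specification, not code from the API or its SDKs:

```javascript
// Sketch of the documented sort semantics for one sort specification.
// `order` is "asc" | "desc"; `nulls` is "auto" | "first" | "last".
// Under "auto", nulls compare as bigger than any value, so they land
// last for asc and first for desc; "first"/"last" ignore `order`.
function compareBySpec(a, b, { order = "asc", nulls = "auto" } = {}) {
  const aNull = a === null || a === undefined;
  const bNull = b === null || b === undefined;
  if (aNull || bNull) {
    if (aNull && bNull) return 0;
    if (nulls === "first") return aNull ? -1 : 1;
    if (nulls === "last") return aNull ? 1 : -1;
    // "auto": null is bigger than everything, then `order` is applied
    const cmp = aNull ? 1 : -1;
    return order === "asc" ? cmp : -cmp;
  }
  const cmp = a < b ? -1 : a > b ? 1 : 0;
  return order === "asc" ? cmp : -cmp;
}
```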
You can sort on the following properties:
| Property |
|---|
| `["assetId"]` |
| `["createdTime"]` |
| `["dataSetId"]` |
| `["description"]` |
| `["externalId"]` |
| `["lastUpdatedTime"]` |
| `["metadata", "<someCustomKey>"]` |
| `["name"]` |
The `sort` array must contain 1 to 2 elements.
object (Filter) | |
(Boolean filter (and (object) or or (object) or not (object))) or (Leaf filter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (TimeSeriesFilterLanguage) A filter DSL (Domain Specific Language) to define advanced filter queries. At the top level, an | |
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Return up to this many results. |
cursor | string |
partition | string (Partition) Splits the data set into `N` partitions; the parameter has the format `M/N`, where `M` runs from 1 to `N`. To prevent unexpected problems and maximize read throughput, you should use at most 10 (N <= 10) partitions. When using more than 10 partitions, CDF may silently reduce the number of partitions.
In future releases of the resource APIs, Cognite may reject requests that specify more than 10 partitions. When Cognite enforces this behavior, such requests will result in a 400 Bad Request status. |
Array of objects (TimeSeriesSortItem) [ 1 .. 2 ] items Sort by array of selected properties. |
{- "filter": {
- "name": "string",
- "unit": "string",
- "isString": true,
- "isStep": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "assetIds": [
- 363848954441724,
- 793045462540095,
- 1261042166839739
], - "assetExternalIds": [
- "my.known.id"
], - "rootAssetIds": [
- 343099548723932,
- 88483999203217
], - "assetSubtreeIds": [
- {
- "id": 1
}
], - "dataSetIds": [
- {
- "id": 1
}
], - "externalIdPrefix": "my.known.prefix",
- "createdTime": {
- "max": 0,
- "min": 0
}, - "lastUpdatedTime": {
- "max": 0,
- "min": 0
}
}, - "advancedFilter": {
- "or": [
- {
- "not": {
- "and": [
- {
- "equals": {
- "property": [
- "metadata",
- "manufacturer"
], - "value": "acme"
}
}, - {
- "in": {
- "property": [
- "name"
], - "values": [
- "pump-1-temperature",
- "motor-9-temperature"
]
}
}, - {
- "range": {
- "property": [
- "dataSetId"
], - "gte": 1,
- "lt": 10
}
}
]
}
}, - {
- "and": [
- {
- "equals": {
- "property": [
- "assetId"
], - "value": 1234
}
}, - {
- "equals": {
- "property": [
- "description"
], - "value": "Temperature in Celsius"
}
}
]
}
]
}, - "limit": 100,
- "cursor": "4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo",
- "partition": "1/10",
- "sort": [
- {
- "property": [
- "string"
], - "order": "asc",
- "nulls": "first"
}
]
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "name": "string",
- "isString": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "unit": "string",
- "assetId": 1,
- "isStep": true,
- "description": "string",
- "securityCategories": [
- 0
], - "dataSetId": 1,
- "createdTime": 0,
- "lastUpdatedTime": 0
}
], - "nextCursor": "string"
}
Insert data points into a time series. You can do this for multiple time series. If you insert a data point with a timestamp that already exists, it will be overwritten with the new value.
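The overwrite-on-equal-timestamp behavior can be modeled locally. The sketch below is a client-side illustration of the documented semantics, not a client for the API:

```javascript
// Models the documented insert semantics: a data point whose timestamp
// already exists is overwritten with the new value; other points are kept.
function upsertDatapoints(existing, incoming) {
  const byTimestamp = new Map(existing.map((p) => [p.timestamp, p.value]));
  for (const p of incoming) byTimestamp.set(p.timestamp, p.value); // overwrite on collision
  return [...byTimestamp.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([timestamp, value]) => ({ timestamp, value }));
}
```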
The datapoints to insert.
required | Array of DatapointsWithInternalId (object) or DatapointsWithExternalId (object) (DatapointsPostDatapoint) [ 1 .. 10000 ] items |
{- "items": [
- {
- "datapoints": [
- {
- "timestamp": -2208988800000,
- "value": 0
}
], - "id": 1
}
]
}
{ }
List time series. Use `nextCursor` to paginate through the results.
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Limits the number of results to return. CDF returns a maximum of 1000 results even if you specify a higher limit. |
includeMetadata | boolean Default: true Whether the metadata field should be returned or not. |
cursor | string Example: cursor=4zj0Vy2fo0NtNMb229mI9r1V3YG5NBL752kQz1cKtwo Cursor for paging through results. |
partition | string Example: partition=1/10 Splits the data set into `N` partitions; the parameter has the format `M/N`, where `M` runs from 1 to `N`. To prevent unexpected problems and maximize read throughput, you should use at most 10 (N <= 10) partitions. When using more than 10 partitions, CDF may silently reduce the number of partitions.
In future releases of the resource APIs, Cognite may reject requests that specify more than 10 partitions. When Cognite enforces this behavior, such requests will result in a 400 Bad Request status. |
assetIds | string <jsonArray(int64)> (JsonArrayInt64) Example: assetIds=[363848954441724, 793045462540095, 1261042166839739] Gets the time series related to the assets. The format is a list of IDs serialized as a JSON array(int64). Takes [ 1 .. 100 ] unique items. |
rootAssetIds | string <jsonArray(int64)> (JsonArrayInt64) Example: rootAssetIds=[363848954441724, 793045462540095, 1261042166839739] Only includes time series that have a related asset in a tree rooted at any of these root |
externalIdPrefix | string (CogniteExternalIdPrefix) <= 255 characters Example: externalIdPrefix=my.known.prefix Filter by this (case-sensitive) prefix for the external ID. |
const timeseries = await client.timeseries.list({ filter: { assetIds: [1, 2] }});
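To retrieve more than one page, the `nextCursor` loop can be sketched generically. `fetchPage` below is any async function you supply that takes a cursor and returns `{ items, nextCursor }` (for example, a wrapper around this list endpoint); it is an assumption of this sketch, not part of the SDK:

```javascript
// Generic cursor-pagination sketch for endpoints that return `nextCursor`.
// An absent nextCursor in a response means the last page has been reached.
async function listAll(fetchPage) {
  const all = [];
  let cursor = undefined;
  do {
    const { items, nextCursor } = await fetchPage(cursor);
    all.push(...items);
    cursor = nextCursor;
  } while (cursor);
  return all;
}
```

Note that, per the pagination overview, all request parameters except `cursor` must stay the same between pages.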
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "name": "string",
- "isString": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "unit": "string",
- "assetId": 1,
- "isStep": true,
- "description": "string",
- "securityCategories": [
- 0
], - "dataSetId": 1,
- "createdTime": 0,
- "lastUpdatedTime": 0
}
], - "nextCursor": "string"
}
Retrieves a list of data points from multiple time series in a project. This operation supports aggregation and pagination. Learn more about aggregation.
Note: if `start` isn't specified at the top level or for an individual item, it defaults to epoch 0, which is 1 January 1970, thus excluding any data points before 1970. Specify `start` as a negative number to get data points before 1970.
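The `start`/`end` fields accept either milliseconds since epoch or the relative `N[timeunit]-ago` string format described below. A minimal parser for that documented format might look like this; it is a local sketch, not code from the API or its SDKs:

```javascript
// Parses the documented relative-time format N[timeunit]-ago, where
// timeunit is w, d, h, m, or s (e.g. "2d-ago" = two days before `now`).
// "now" and plain millisecond numbers pass through unchanged.
const UNIT_MS = { s: 1000, m: 60000, h: 3600000, d: 86400000, w: 604800000 };

function toTimestamp(value, now = Date.now()) {
  if (typeof value === "number") return value;
  if (value === "now") return now;
  const match = /^(\d+)([wdhms])-ago$/.exec(value);
  if (!match) throw new Error(`unsupported time string: ${value}`);
  return now - Number(match[1]) * UNIT_MS[match[2]];
}
```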
Specify parameters to query for multiple data points. If you omit fields in individual data point query items, the top-level field values are used. For example, you can specify a default limit for all items by setting the top-level limit field. If you request aggregates, only the aggregates are returned. If you don't request any aggregates, all data points are returned.
required | Array of QueryWithInternalId (object) or QueryWithExternalId (object) (DatapointsQuery) [ 1 .. 100 ] items |
integer or string (TimestampOrStringStart) Get datapoints starting from, and including, this time. The format is N[timeunit]-ago where timeunit is w,d,h,m,s. Example: '2d-ago' gets datapoints that are up to 2 days old. You can also specify time in milliseconds since epoch. Note that for aggregates, the start time is rounded down to a whole granularity unit (in UTC timezone). Daily granularities (d) are rounded to 0:00 AM; hourly granularities (h) to the start of the hour, etc. | |
integer or string (TimestampOrStringEnd) Get datapoints up to, but excluding, this point in time. Same format as for start. Note that when using aggregates, the end will be rounded up such that the last aggregate represents a full aggregation interval containing the original end, where the interval is the granularity unit times the granularity multiplier. For granularity 2d, the aggregation interval is 2 days, if end was originally 3 days after the start, it will be rounded to 4 days after the start. | |
limit | integer <int32> Default: 100 Returns up to this number of data points. The maximum is 100000 non-aggregated data points and 10000 aggregated data points in total across all queries in a single request. |
aggregates | Array of strings (Aggregate) [ 1 .. 10 ] items unique Items Enum: "average" "max" "min" "count" "sum" "interpolation" "stepInterpolation" "totalVariation" "continuousVariance" "discreteVariance" Specify the aggregates to return. Omit to return data points without aggregation. |
granularity | string The time granularity size and unit to aggregate over. Valid entries are 'day, hour, minute, second', or short forms 'd, h, m, s', or a multiple of these indicated by a number as a prefix. For 'second' and 'minute', the multiple must be an integer between 1 and 120 inclusive; for 'hour' and 'day', the multiple must be an integer between 1 and 100000 inclusive. For example, a granularity '5m' means that aggregates are calculated over 5 minutes. This field is required if aggregates are specified. |
includeOutsidePoints | boolean Default: false Defines whether to include the last data point before the requested time period and the first one after. This option can be useful for interpolating data. It's not available for aggregates or cursors.
Note: If there are more than |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{- "items": [
- {
- "start": 0,
- "end": 0,
- "limit": 0,
- "aggregates": [
- "average"
], - "granularity": "1h",
- "includeOutsidePoints": false,
- "cursor": "string",
- "id": 1
}
], - "start": 0,
- "end": 0,
- "limit": 100,
- "aggregates": [
- "average"
], - "granularity": "1h",
- "includeOutsidePoints": false,
- "ignoreUnknownIds": false
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "isString": false,
- "isStep": true,
- "unit": "string",
- "nextCursor": "string",
- "datapoints": [
- {
- "timestamp": 1638795554528,
- "average": 0,
- "max": 0,
- "min": 0,
- "count": 0,
- "sum": 0,
- "interpolation": 0,
- "stepInterpolation": 0,
- "continuousVariance": 0,
- "discreteVariance": 0,
- "totalVariation": 0
}
]
}
]
}
Retrieves the latest data point in one or more time series. Note that the latest data point in a time series is the one with the highest timestamp, which is not necessarily the one that was ingested most recently.
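The distinction between "highest timestamp" and "most recently ingested" can be made concrete with a local model. This sketch selects the latest point strictly before a cutoff from an in-memory list; it is an illustration of the stated semantics, not API client code:

```javascript
// "Latest data point" = the one with the highest timestamp before the
// cutoff, regardless of the order in which points were ingested.
function latestBefore(datapoints, before) {
  let latest = null;
  for (const p of datapoints) {
    if (p.timestamp < before && (latest === null || p.timestamp > latest.timestamp)) {
      latest = p;
    }
  }
  return latest;
}
```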
The list of the queries to perform.
required | Array of QueryWithInternalId (object) or QueryWithExternalId (object) (LatestDataBeforeRequest) [ 1 .. 100 ] items List of latest queries |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{- "items": [
- {
- "before": "now",
- "id": 1
}
], - "ignoreUnknownIds": false
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "isString": true,
- "isStep": true,
- "unit": "string",
- "nextCursor": "string",
- "datapoints": [
- {
- "timestamp": 1638795554528,
- "value": 0
}
]
}
]
}
Retrieves one or more time series by ID or external ID. The response returns the time series in the same order as in the request.
List of the IDs of the time series to retrieve.
required | Array of QueryWithInternalId (object) or QueryWithExternalId (object) [ 1 .. 1000 ] items unique List of ID objects. |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{- "items": [
- {
- "id": 1
}
], - "ignoreUnknownIds": false
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "name": "string",
- "isString": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "unit": "string",
- "assetId": 1,
- "isStep": true,
- "description": "string",
- "securityCategories": [
- 0
], - "dataSetId": 1,
- "createdTime": 0,
- "lastUpdatedTime": 0
}
]
}
Fulltext search for time series based on result relevance. Primarily meant for human-centric use cases, not for programs, since matching and order may change over time. Additional filters can also be specified. This operation does not support pagination.
object (Filter) | |
object (Search) | |
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Return up to this many results. |
{- "filter": {
- "name": "string",
- "unit": "string",
- "isString": true,
- "isStep": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "assetIds": [
- 363848954441724,
- 793045462540095,
- 1261042166839739
], - "assetExternalIds": [
- "my.known.id"
], - "rootAssetIds": [
- 343099548723932,
- 88483999203217
], - "assetSubtreeIds": [
- {
- "id": 1
}
], - "dataSetIds": [
- {
- "id": 1
}
], - "externalIdPrefix": "my.known.prefix",
- "createdTime": {
- "max": 0,
- "min": 0
}, - "lastUpdatedTime": {
- "max": 0,
- "min": 0
}
}, - "search": {
- "name": "string",
- "description": "string",
- "query": "some other"
}, - "limit": 100
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "name": "string",
- "isString": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "unit": "string",
- "assetId": 1,
- "isStep": true,
- "description": "string",
- "securityCategories": [
- 0
], - "dataSetId": 1,
- "createdTime": 0,
- "lastUpdatedTime": 0
}
]
}
Updates one or more time series. Fields not included in the request remain unchanged.

For primitive fields (those whose type is string, number, or boolean), use `"set": value` to update the value; use `"setNull": true` to set the field to null.

For JSON array fields (for example `securityCategories`), use `"set": [value1, value2]` to replace the value; use `"add": [value1, value2]` to add values; use `"remove": [value1, value2]` to remove values.
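The set / setNull / add / remove semantics can be modeled on a local record. This is a sketch of the documented update operations applied client-side, not API client code, and it glosses over server-side validation:

```javascript
// Local model of the documented update operations:
// - primitives: {set: v} or {setNull: true}
// - array fields: {set: [...]}, {add: [...]}, {remove: [...]}
function applyUpdate(timeSeries, update) {
  const result = { ...timeSeries };
  for (const [field, ops] of Object.entries(update)) {
    if (ops.setNull === true) {
      result[field] = null;
    } else if (Array.isArray(result[field]) || Array.isArray(ops.set)) {
      let arr = ops.set ?? result[field] ?? [];
      if (ops.add) arr = [...arr, ...ops.add];
      if (ops.remove) arr = arr.filter((v) => !ops.remove.includes(v));
      result[field] = arr;
    } else if ("set" in ops) {
      result[field] = ops.set;
    }
  }
  return result;
}
```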
List of changes.
required | Array of TimeSeriesUpdateById (object) or TimeSeriesUpdateByExternalId (object) (TimeSeriesUpdate) [ 1 .. 1000 ] items |
{- "items": [
- {
- "update": {
- "externalId": {
- "set": "string"
}, - "name": {
- "set": "string"
}, - "metadata": {
- "set": {
- "key1": "value1",
- "key2": "value2"
}
}, - "unit": {
- "set": "string"
}, - "assetId": {
- "set": 0
}, - "isStep": {
- "set": true
}, - "description": {
- "set": "string"
}, - "securityCategories": {
- "set": [
- 0
]
}, - "dataSetId": {
- "set": 0
}
}, - "id": 1
}
]
}
{- "items": [
- {
- "id": 1,
- "externalId": "string",
- "name": "string",
- "isString": true,
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "unit": "string",
- "assetId": 1,
- "isStep": true,
- "description": "string",
- "securityCategories": [
- 0
], - "dataSetId": 1,
- "createdTime": 0,
- "lastUpdatedTime": 0
}
]
}
Synthetic Time Series (STS) is a way to combine input time series, constants, and operators to create entirely new time series. For example, the expression `24 * TS{externalId='production/hour'}` converts hourly production rates into daily ones. STS is not limited to simple conversions:

- `TS{id=123} + TS{externalId='hei'}`
- `sin(pow(TS{id=123}, 2))`
- `TS{id=123, aggregate='average', granularity='1h'}+TS{id=456}`

To learn more about synthetic time series, please follow our guide.
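Expressions reference time series with the `TS{...}` syntax shown above. A small hypothetical helper can render such references from plain objects; the `TS{...}` syntax is from the docs, while the `tsRef` helper itself is our own:

```javascript
// Hypothetical helper rendering the TS{...} references used in
// synthetic time series expressions.
function tsRef({ id, externalId, aggregate, granularity }) {
  const parts = [];
  if (id !== undefined) parts.push(`id=${id}`);
  if (externalId !== undefined) parts.push(`externalId='${externalId}'`);
  if (aggregate !== undefined) parts.push(`aggregate='${aggregate}'`);
  if (granularity !== undefined) parts.push(`granularity='${granularity}'`);
  return `TS{${parts.join(", ")}}`;
}
```

For example, `tsRef({ id: 123, aggregate: "average", granularity: "1h" })` produces the reference used in the third example above.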
Execute an on-the-fly synthetic query
The list of queries to perform
required | Array of objects (SyntheticQuery) [ 1 .. 10 ] items |
{- "items": [
- {
- "expression": "(5 + TS{externalId='hello'}) / TS{id=123, aggregate='average', granularity='1h'}",
- "start": 0,
- "end": 0,
- "limit": 100
}
]
}
{- "items": [
- {
- "isString": false,
- "datapoints": [
- {
- "timestamp": 0,
- "value": 0
}
]
}
]
}
Event objects store complex information about multiple assets over a time period. For example, an event can describe two hours of maintenance on a water pump and some associated pipes, or a future time window where the pump is scheduled for inspection. This is in contrast with data points in time series that store single pieces of information about one asset at specific points in time (e.g., temperature measurements).
An event’s time period is defined by a start time and end time, both millisecond timestamps since the UNIX epoch. The timestamps can be in the future. In addition, events can have a text description as well as arbitrary metadata and properties.
Asset references obtained from an event (through asset IDs) may be invalid due to the non-transactional nature of HTTP. They are maintained in an eventually consistent manner.
Rate and concurrency limits apply to some of the endpoints. If a request exceeds one of the limits, it will be throttled with a `429: Too Many Requests` response. More on limit types and how to avoid being throttled is described here.
The following limits apply to the List events, Filter events, Aggregate events, and Search events endpoints. These limits apply to all of the endpoints simultaneously; i.e., requests made to different endpoints are counted together. Note the additional conditions that apply to the Aggregate events endpoint, as it provides the most resource-consuming operations.
| Limit | Per project | Per user (identity) |
|---|---|---|
| Rate | 30 rps total, of which no more than 15 rps to Aggregate | 20 rps, of which no more than 10 rps to Aggregate |
| Concurrency | 15 parallel requests, of which no more than 6 to Aggregate | 10 parallel requests, of which no more than 4 to Aggregate |
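When a request is throttled with `429`, the recommended recovery is truncated exponential backoff. A minimal sketch of such a delay schedule follows; the base, cap, and jitter values are illustrative assumptions, not values prescribed by the API:

```javascript
// Truncated exponential backoff for retrying 429 responses.
// The delay doubles per attempt up to a cap ("truncated"), with
// equal jitter to avoid synchronized retries. Base/cap are assumptions.
function backoffDelayMs(attempt, baseMs = 500, capMs = 32000) {
  const exp = Math.min(baseMs * 2 ** attempt, capMs); // truncation
  return exp / 2 + Math.random() * (exp / 2); // equal jitter
}
```

A retry loop would sleep for `backoffDelayMs(attempt)` milliseconds after each consecutive `429` before re-issuing the request.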
The aggregation API lets you compute aggregated results on events, such as getting the count of all events in a project or listing the different event descriptions in your project. Filters behave the same way as for the Filter events endpoint. In text properties, the values are aggregated in a case-insensitive manner.

`aggregateFilter` works similarly to `advancedFilter`, but always applies to aggregate properties. For instance, in an aggregation on the `source` property, only the values (i.e., buckets) of the `source` property can be filtered out.

This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Events resource description for more information.
aggregate | string Value: "count" Type of aggregation to apply.
(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (EventAdvancedFilter) A filter DSL (Domain Specific Language) to define advanced filter queries. See more information about filtering DSL here. Supported properties:
Note: Filtering on the | |||||||||||||||||||||||||||||||
object (EventFilter) Filter on events filter with exact match |
{- "aggregate": "count",
- "advancedFilter": {
- "or": [
- {
- "not": {
- "and": [
- {
- "equals": {
- "property": [
- "metadata",
- "severity"
], - "value": "medium"
}
}, - {
- "in": {
- "property": [
- "source"
], - "values": [
- "inspection protocol",
- "incident report"
]
}
}, - {
- "range": {
- "property": [
- "dataSetId"
], - "gte": 1,
- "lt": 10
}
}
]
}
}, - {
- "and": [
- {
- "equals": {
- "property": [
- "type"
], - "value": "equipment malfunction"
}
}, - {
- "equals": {
- "property": [
- "subtype"
], - "value": "mechanical failure"
}
}
]
}, - {
- "search": {
- "property": [
- "description"
], - "value": "outage"
}
}
]
}, - "filter": {
- "startTime": {
- "max": 0,
- "min": 0
}, - "endTime": {
- "max": 0,
- "min": 0
}, - "activeAtTime": {
- "max": 0,
- "min": 0
}, - "metadata": {
- "property1": "string",
- "property2": "string"
}, - "assetIds": [
- 1
], - "assetExternalIds": [
- "my.known.id"
], - "assetSubtreeIds": [
- {
- "id": 1
}
], - "dataSetIds": [
- {
- "id": 1
}
], - "source": "string",
- "type": "string",
- "subtype": "string",
- "createdTime": {
- "max": 0,
- "min": 0
}, - "lastUpdatedTime": {
- "max": 0,
- "min": 0
}, - "externalIdPrefix": "my.known.prefix"
}
}
{- "items": [
- {
- "count": 10
}
]
}
Creates multiple event objects in the same project. It is possible to post a maximum of 1000 events per request.
List of events to be posted. It is possible to post a maximum of 1000 events per request.
required | Array of objects (ExternalEvent) [ 1 .. 1000 ] items |
{- "items": [
- {
- "externalId": "my.known.id",
- "dataSetId": 1,
- "startTime": 0,
- "endTime": 0,
- "type": "string",
- "subtype": "string",
- "description": "string",
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "assetIds": [
- 1
], - "source": "string"
}
]
}
{- "items": [
- {
- "externalId": "my.known.id",
- "dataSetId": 1,
- "startTime": 0,
- "endTime": 0,
- "type": "string",
- "subtype": "string",
- "description": "string",
- "metadata": {
- "property1": "string",
- "property2": "string"
}, - "assetIds": [
- 1
], - "source": "string",
- "id": 1,
- "lastUpdatedTime": 0,
- "createdTime": 0
}
]
}
Deletes events with the given IDs. A maximum of 1000 events can be deleted per request.
List of IDs to delete.
required | Array of InternalId (object) or ExternalId (object) (EitherId) [ 1 .. 1000 ] items |
ignoreUnknownIds | boolean Default: false Ignore IDs and external IDs that are not found |
{- "items": [
- {
- "id": 1
}
], - "ignoreUnknownIds": false
}
{ }
Retrieve a list of events in the same project. This operation supports pagination by cursor. Apply Filtering and Advanced filtering criteria to select a subset of events.
The advanced filter lets you create complex filtering expressions that combine simple operations, such as `equals`, `prefix`, and `exists`, using the Boolean operators `and`, `or`, and `not`. It applies to basic fields as well as metadata. See the `advancedFilter` attribute in the example, and see more information about the filtering DSL here.
| Leaf filter | Supported fields | Description |
|---|---|---|
| `containsAll` | Array type fields | Only includes results which contain all of the specified values. `{"containsAll": {"property": ["property"], "values": [1, 2, 3]}}` |
| `containsAny` | Array type fields | Only includes results which contain at least one of the specified values. `{"containsAny": {"property": ["property"], "values": [1, 2, 3]}}` |
| `equals` | Non-array type fields | Only includes results that are equal to the specified value. `{"equals": {"property": ["property"], "value": "example"}}` |
| `exists` | All fields | Only includes results where the specified property exists (has a value). `{"exists": {"property": ["property"]}}` |
| `in` | Non-array type fields | Only includes results that are equal to one of the specified values. `{"in": {"property": ["property"], "values": [1, 2, 3]}}` |
| `prefix` | String type fields | Only includes results which start with the specified value. `{"prefix": {"property": ["property"], "value": "example"}}` |
| `range` | Non-array type fields | Only includes results that fall within the specified range. `{"range": {"property": ["property"], "gt": 1, "lte": 5}}` Supported operators: `gt`, `lt`, `gte`, `lte` |
| `search` | `["description"]` | Introduced to provide functional parity with the `/events/search` endpoint. `{"search": {"property": ["property"], "value": "example"}}` |
The `search` leaf filter provides functional parity with the `/events/search` endpoint. It's available only for the `["description"]` field. When only this filter is specified, with no explicit ordering, the behavior is the same as that of the `/events/search` endpoint without filters. Explicit sorting overrides the default ordering by relevance. The `search` leaf filter can be used like any other leaf filter to create complex queries. See the `search` filter in the `advancedFilter` attribute in the example.
- `and` and `or` clauses must have at least one element.
- The `property` array of each leaf filter must match one of the existing properties (a static top-level property or a dynamic metadata property).
- The `containsAll`, `containsAny`, and `in` filter `values` array size must be in the range [1, 100].
- The `containsAll`, `containsAny`, and `in` filter `values` array must contain elements of a primitive type (number, string).
- The `range` filter must have at least one of the `gt`, `gte`, `lt`, `lte` attributes. `gt` is mutually exclusive with `gte`, and `lt` is mutually exclusive with `lte`. For metadata, both upper and lower bounds must be specified.
- The `gt`, `gte`, `lt`, `lte` values in the `range` filter must be primitive values.
- The `search` filter `value` must not be blank, and its length must be in the range [1, 128].

Maximum property lengths (in characters):

- `externalId` - 255
- `description` - 128 for the `search` filter and 255 for other filters
- `type` - 64
- `subtype` - 64
- `source` - 128
- `metadata` key - 128

By default, events are sorted by their creation time in ascending order.
Use the `search` leaf filter to sort the results by relevance. Sorting by other fields can be explicitly requested. The `order` field is optional and defaults to `desc` for `_score_` and `asc` for all other fields. The `nulls` field is optional and defaults to `auto`, which the service translates to `last` for the `asc` order and to `first` for the `desc` order.

Partitioning is done independently of sorting: there's no guarantee of the sort order between elements from different partitions. See the `sort` attribute in the example.

If the `nulls` attribute has the `auto` value, or the attribute isn't specified, null (missing) values are considered bigger than any other values. They are placed last when sorting in the `asc` order and first when sorting in `desc`. Otherwise, missing values are placed according to the `nulls` attribute (`last` or `first`), and their placement doesn't depend on the `order` value. Values such as empty strings aren't considered nulls.

Use the special sort property `_score_` when sorting by relevance. The more filters a particular event matches, the higher its score. This can be useful, for example, when building UIs. Assume we want exact matches displayed above prefix matches, as in the request below. An event with the type `fire` will match both the `equals` and `prefix` filters and will therefore have a higher score than events with types like `fire training` that match only the `prefix` filter.
"advancedFilter" : {
"or" : [
{
"equals": {
"property": ["type"],
"value": "fire"
}
},
{
"prefix": {
"property": ["type"],
"value": "fire"
}
}
]
},
"sort": [
{
"property" : ["_score_"]
}
]
This endpoint is meant for data analytics/exploration usage and is not suitable for high-load data retrieval. It is subject to the new throttling schema (limited request rate and concurrency). Please check the Events resource description for more information.
object (EventFilter) Filter on events filter with exact match | |||||||||||||||||||||||||||||||
(BoolFilter (and (object) or or (object) or not (object))) or (LeafFilter (equals (object) or in (object) or range (object) or prefix (object) or exists (object) or containsAny (object) or containsAll (object) or search (object))) (EventAdvancedFilter) A filter DSL (Domain Specific Language) to define advanced filter queries. See more information about filtering DSL here. Supported properties:
Note: Filtering on the | |||||||||||||||||||||||||||||||
limit | integer <int32> [ 1 .. 1000 ] Default: 100 Limits the maximum number of results to be returned by a single request. In case there are more results to the request, the 'nextCursor' attribute will be provided as part of the response. Request may contain less results than the request limit. | ||||||||||||||||||||||||||||||
Array of modern (objects) or Array of deprecated (strings) | |||||||||||||||||||||||||||||||
cursor | string | ||||||||||||||||||||||||||||||
partition | string (Partition) < |