API Reference

Config

A Config object is required to connect to Conservator. There are a variety of ways to create an instance of Config.

In general, use Config.default().

class FLIR.conservator.config.Config(**kwargs)

Contains a user’s API Key (token) and other settings, to be used when authenticating operations on an instance of Conservator at a certain URL.

Config attribute names (environment variables, dictionary keys):
  • CONSERVATOR_API_KEY

  • CONSERVATOR_URL

  • CONSERVATOR_MAX_RETRIES (default: 5)

  • CONSERVATOR_CVC_CACHE_PATH (default: .cvc/cache)

Parameters:

kwargs – A dictionary of (str: str) pairs providing values for the Config attributes. Any attribute not in the dictionary will use its default value. If no default value is defined, an error is raised.
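
For instance, a Config can be constructed directly from keyword arguments. A minimal sketch (the key and URL values below are placeholders):

>>> from FLIR.conservator.config import Config
>>> config = Config(CONSERVATOR_API_KEY="your-api-key",
...                 CONSERVATOR_URL="https://flirconservator.com")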

static default(save=True)

Gets the default config. This works by iterating through the various credential sources, and returning the first one that works. Sources are queried in this order:

  • Environment variables

  • Config file

  • User input

Parameters:

save – If True and the source is standard input, save the config for future use so the user won’t need to type the values again.
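
A minimal usage sketch:

>>> from FLIR.conservator.config import Config
>>> config = Config.default(save=False)  # don't persist credentials entered at the prompt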

static default_config_path()

The default config is saved in ~/.config/conservator-cli/default.json.

static delete_saved_default_config()

Delete the default saved config, if it exists.

static delete_saved_named_config(name)

Delete the config named name, if it exists.

static from_default_config_file()

Creates a Config object from the JSON config file at the Config.default_config_path().

static from_dict(data)

Construct a Config object from a dict.

static from_environ()

Creates a Config object from environment variables.

static from_file(path)

Creates a Config object from a JSON config file.

Note

For security, this file’s mode will be set to 600.

Parameters:

path – The path to the JSON config file.

static from_input()

Creates a Config object from standard input.

static from_name(name)

Create a Config object from a saved config name.

classmethod from_named_config_file(name)

Create a Config object from a named config file.

static named_config_path(name)

Configs are saved in ~/.config/conservator-cli/ as name.json.

save_to_default_config()

Saves the Config to the Default config file, meaning this config will be loaded by Config.default().

save_to_file(path)

Saves the Config to path as JSON.

This file can be loaded using Config.from_file().

Note

For security, this file’s mode will be set to 600.

Parameters:

path – The file path to save to.

save_to_named_config(name)

Saves the Config to the named config file, meaning this config can be loaded by Config.from_name().
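
A sketch of saving and later reloading a named config (the name "staging" is just an example):

>>> from FLIR.conservator.config import Config
>>> config = Config.default()
>>> config.save_to_named_config("staging")
>>> staging_config = Config.from_name("staging")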

static saved_config_names()

Returns a list of saved config names.

to_dict()

Returns the config as a dict.

exception FLIR.conservator.config.ConfigError
FLIR.conservator.config.validate_cache_path(config_dict)

Validates the cache path value in a config.

FLIR.conservator.config.validate_key(config_dict)

Validates the API Key value in a config.

FLIR.conservator.config.validate_max_retries(config_dict)

Validates the max retries value in a config.

FLIR.conservator.config.validate_url(config_dict)

Validates the URL value in a config.

Conservator

class FLIR.conservator.conservator.Conservator(config)

Bases: ConservatorConnection

Conservator is the highest level class of this library. It will likely be the starting point for all queries and operations.

You can get an instance using the default Config:

>>> Conservator.default()
<Conservator at https://flirconservator.com>

You can also create an instance by passing any Config object.

Parameters:

config – The Config object to use for this connection.

static create(config_name=None, save=True)

Returns a Conservator using named config if given, and otherwise creates a default instance via Conservator.default().

static default(save=True)

Returns a Conservator using Config.default().

static from_config_dict(config_dict)

Returns a Conservator using a config constructed from the supplied dict.

static generate_id()

Generate a new ID.

The ID consists of ID_LENGTH characters from the ID_CHARSET. The beginning of the ID is based on the current time, and the remaining characters are random.

get_media_instance_from_id(media_id, fields=None)

Returns a Video or Image object from an ID. These types are checked in this order, until id_exists() returns True. If neither matches, an UnknownMediaIdException is raised.
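
A short sketch, assuming the ID below is a real media ID:

>>> conservator = Conservator.default()
>>> media = conservator.get_media_instance_from_id("some-media-id", fields="name")
>>> media.name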

get_user()

Returns the User that the provided API token authorizes.

static is_valid_id(id_)

Check whether the supplied ID is valid.

exception FLIR.conservator.conservator.UnknownMediaIdException

Bases: Exception

Raised when a media ID cannot be resolved to a Video or Image.

Conservator Connection

class FLIR.conservator.connection.ConservatorConnection(config)

Acts as an intermediary between SGQLC and a remote Conservator instance.

Parameters:

config – A Config providing the Conservator URL and user authentication info.

dvc_hash_exists(md5)

Returns True if DVC contains the given md5 hash, and False otherwise.

get_authenticated_url()

Returns an authenticated URL that contains an encoded username and token.

This URL is used when downloading files and repositories.

get_collection_url(collection)

Returns a URL for viewing collection.

get_domain()

Returns the domain name of the Conservator instance.

get_dvc_hash_url(md5)

Returns the DVC URL for downloading content with the given md5 hash.

get_dvc_url()

Returns the DVC URL used for downloading files.

get_email()

Returns the current User’s email.

get_url()

Returns the base URL for Conservator.

get_url_encoded_user()

Returns the encoded email-key combination used to authenticate URLs.

query(query, operation_base=None, fields=None, **kwargs)

Provides an alternative way to prepare and run SGQLC operations.

Parameters:
  • query – The SGQLC query to run.

  • operation_base – Not required. Included for backwards-compatibility.

  • fields – A FLIR.conservator.fields_request.FieldsRequest of the fields to include (or exclude) in the results.

  • kwargs – These named parameters are passed as arguments to the query.

run(operation, variables=None)

Runs an SGQLC operation on the remote instance, and returns the response, if there were no errors.

If any errors are encountered, they will be raised with a ConservatorGraphQLServerError.

static to_graphql_url(url)

Ensures a URL is formatted correctly (i.e. ends with “/graphql”).

exception FLIR.conservator.connection.ConservatorGraphQLServerError(operation, errors)

There was a problem with a GraphQL query, and it’s unclear what the cause was.

Parameters:
  • operation – The SGQLC operation that caused the error.

  • errors – A list of errors returned by the server.

exception FLIR.conservator.connection.ConservatorMalformedQueryException

There was a problem with a GraphQL query, and it’s the client’s fault.

File Transfers

class FLIR.conservator.file_transfers.ConservatorFileTransfers(conservator)

Bundles methods for uploading and downloading files to Conservator.

These methods cannot be standalone utilities because in some deployments URLs will be relative to the base Conservator URL. Therefore, all download and upload operations need to have a reference to the Conservator instance.

download(url, local_path, no_meter=False, max_retries=5)

Download the file from Conservator url to the local_path.

download_if_missing(url, local_path, expected_md5, no_meter=False)

Check that a file exists at local_path with the expected_md5 hash. If it doesn’t, download it from url.

download_many(downloads, process_count=None, no_meter=False)

Download a list of DownloadRequest in parallel.

Parameters:
  • downloads – The list of DownloadRequest to download.

  • process_count – The number of concurrent downloads. If None, uses os.cpu_count().

  • no_meter – If True, hide the progress bar.
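
A sketch of a parallel download, assuming conservator is an existing Conservator instance; the URL and paths are placeholders:

>>> from FLIR.conservator.file_transfers import ConservatorFileTransfers, DownloadRequest
>>> transfers = ConservatorFileTransfers(conservator)
>>> requests = [DownloadRequest(url="/some/remote/url", local_path="downloads/file.jpg")]
>>> transfers.download_many(requests, process_count=4)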

full_url(url)

Converts a url from Conservator into a full URL (protocol, domain, etc.) that can be used for uploading or downloading.

upload(url, local_path, max_retries=5)

Upload the file at local_path to Conservator url.

upload_many(uploads, process_count=None, no_meter=False)

Upload a list of UploadRequest in parallel.

Parameters:
  • uploads – The list of UploadRequest to upload.

  • process_count – The number of concurrent uploads. If None, uses os.cpu_count().

  • no_meter – If True, hide the progress bar.

class FLIR.conservator.file_transfers.DownloadRequest(url, local_path, expected_md5=None)

For use with ConservatorFileTransfers.download_many().

A request to download from url to local_path. If expected_md5 is given, check the file doesn’t already exist with the correct hash before downloading.

exception FLIR.conservator.file_transfers.FileDownloadException

Something went wrong when downloading a file.

exception FLIR.conservator.file_transfers.FileTransferException

Something went wrong when uploading or downloading a file.

exception FLIR.conservator.file_transfers.FileUploadException

Something went wrong when uploading a file.

class FLIR.conservator.file_transfers.UploadRequest(url, local_path)

For use with ConservatorFileTransfers.upload_many().

A request to upload local_path to url.

Fields Manager

class FLIR.conservator.fields_manager.FieldsManager

Defines default fields for each SGQLC type.

classmethod select_default_fields(selector)

Adds the default fields to the selector based on its type.

For basic types with only scalars, all fields are included. For more complicated types, “useful” scalars are retained, with a few exceptions. In general, you should request only the specific fields you need. The fields included here may change at any time.

See source for which fields are included.

Fields Request

See the class documentation below for details.

class FLIR.conservator.fields_request.FieldsRequest(paths=None)

A collection of fields to include in a query. Many different API calls will require specifying a list of fields using a FieldsRequest.

classmethod create(fields)

Create a FieldsRequest. Any function that accepts fields will eventually use this method to convert fields to a valid FieldsRequest.

This accepts a variety of types:

Pass through:

>>> assert isinstance(fields_request, FieldsRequest)
>>> FieldsRequest.create(fields_request)

A single field:

>>> FieldsRequest.create("name")

A list of fields:

>>> FieldsRequest.create(["name", "owner", "url"])

A dictionary of fields, to include and exclude fields:

>>> FieldsRequest.create({"name": True, "owner": True, "url": False})

If a key’s value is a dictionary, it is used as arguments to query that field:

>>> FieldsRequest.create({"frames": {"page": 0, "limit": 100}})

This is the only way to add arguments to a field.

These examples only demonstrate immediate child fields on an object. You may also specify subfields using a period (.) as a separator:

>>> FieldsRequest.create("children.name")

If one of the fields in a subpath needs arguments, it must be explicitly listed:

>>> FieldsRequest.create({"frames.url" : True, "frames": {"page": 0, "limit": 100}})

If no fields are included in a request, the fields listed in FieldsManager will be requested. This applies to subfields as well; in the following example, the default fields of Video will be requested:

>>> FieldsRequest.create(["name", "videos"])

But if at least one subfield is requested, only that field will be requested and the defaults will be ignored:

>>> FieldsRequest.create(["name", "videos.name"])

The same logic applies to the root object. If no specific fields are included, the default fields defined in FieldsManager are included. Care must be taken when defining default fields that no circular type dependencies are created.

Excluded fields (falsy dict values) override included fields, but do not affect default fields at this time. Please submit an issue if you need to exclude default fields. Including specific fields should always be preferred to relying on defaults.

Note: This design mirrors SGQLC’s field selection syntax (specifically, see __fields__).

exclude(*field_path)

Excludes field_path from the request.

exclude_field(*field_path)

Excludes field_path from the request.

exclude_fields(field_paths)

Excludes field_paths from the request.

include(*field_path)

Includes field_path in the request.

include_field(*field_path)

Includes field_path in the request.

include_fields(field_paths)

Includes field_paths in the request.

Wrappers and TypeProxies

This module contains types that wrap generated SGQLC types.

These wrapped types all extend TypeProxy, and contain a private instance of the underlying SGQLC object. Initialized fields can be accessed on the TypeProxy instance. TypeProxy also provides functions to fetch new fields from Conservator.

Often they will include additional functions that wrap SGQLC queries. For instance, a Collection has get_datasets(), which runs the GraphQL query for a collection’s datasets, and returns them as proxied Dataset objects.

TypeProxy

class FLIR.conservator.wrappers.type_proxy.ListTypeProxy(iterable=(), /)

Identical to built-in list, except it provides to_json(). This ensures all types returned by queries have a to_json().

to_json()

Returns a list suitable for turning into JSON.

exception FLIR.conservator.wrappers.type_proxy.MissingFieldException

Raised when a field can’t be populated, but is required for an operation.

class FLIR.conservator.wrappers.type_proxy.TypeProxy(conservator, instance)

Wraps an SGQLC object. Fields of the underlying instance can be accessed on this instance. Subclasses can add class and instance methods to add functionality.

When you attempt to access a field, we first check that it exists on the underlying instance. If it doesn’t, an AttributeError will be raised.

If it does exist, we check to see if a subclass of TypeProxy is defined with a matching underlying_type. If one is, an instance of that subclass is returned with the value. Otherwise, an instance of the generic TypeProxy is returned. This ensures all values returned by queries have the same basic methods (like TypeProxy.to_json()). The type look-up is compatible with optional and list types.

Instances should be created using TypeProxy.wrap(), which will use the appropriate class and constructor.

Parameters:
  • conservator – The instance of Conservator that created the underlying instance.

  • instance – The SGQLC object to wrap, usually returned by running some query.

classmethod from_id(conservator, id_)

Return a wrapped instance from an ID. The underlying type of the class should match the type of the ID.

This does not populate any fields besides id. You must call populate() on the returned instance to populate any fields.

Note

Use id_exists() to verify that an ID is correct. Otherwise an InvalidIdException may be thrown on later operations.

classmethod from_json(conservator, json)

Return a wrapped instance from a dictionary (usually produced by calling to_json()). The ID should be included for the returned instance to be useful.

static get_wrapping_type(type_)

Gets the TypeProxy subclass with an underlying_type related to type_. If one doesn’t exist, returns the generic TypeProxy.

This checks the base type. For instance, it will match [Video]! with Video.

static has_base_type(base_type, type_)

Returns True if type_ extends base_type in the SGQLC type hierarchy.

For instance, a [Collection] has base type Collection.

has_field(path)

Returns True if the current instance has initialized the specified path.

This is frequently used to test if a call to populate is required, or to verify that a populate call worked.

to_json()

Returns the underlying instance as a dictionary, suitable for turning into JSON.

static wrap(conservator, type_, instance)

Creates a new TypeProxy instance of the appropriate subclass, or a generic one if no subclass exists for the type. Scalar types, such as None, str, bool, etc., are not wrapped and are returned as-is.

Parameters:
  • conservator – Conservator instance tied to the instance. Subclasses use this for many instance methods.

  • type_ – The SGQLC type of the instance to wrap.

  • instance – The SGQLC object to wrap.

FLIR.conservator.wrappers.type_proxy.requires_fields(*fields)

Decorator for requiring fields for an instance method. If missing, calls populate. If populate fails, raises MissingFieldException.

This should be used on any instance method that requires certain fields to function correctly.

Parameters:

fields – Strings containing the names of required fields. They can be subfields (such as “repository.master” on a Dataset).
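
A hypothetical sketch of using the decorator on a TypeProxy subclass method (the subclass, method, and field names are for illustration only):

>>> from FLIR.conservator.wrappers.type_proxy import requires_fields
>>> from FLIR.conservator.wrappers.video import Video
>>> class AnnotatedVideo(Video):
...     @requires_fields("name", "owner")
...     def describe(self):
...         # name and owner are populated automatically if they are missing
...         return f"{self.name} ({self.owner})"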

Queryable

exception FLIR.conservator.wrappers.queryable.InvalidIdException

Raised when a query fails due to an invalid ID.

class FLIR.conservator.wrappers.queryable.QueryableType(conservator, instance)

Adds populate() for querying and populating additional fields.

Subclasses must define by_id_query to be a query that can return more fields of the type given an id. Alternatively, they may define a custom _populate method if the method of querying is unique.

populate(fields=None)

Query conservator for the specified fields, even if they already exist on the object.

To populate fields only if they are missing, use requires_fields() instead.
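
A short sketch, assuming conservator is a Conservator instance and the ID is a real video ID:

>>> from FLIR.conservator.managers import VideoManager
>>> video = VideoManager(conservator).from_id("some-video-id")
>>> video.populate("name")
>>> video.name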

populate_all()

Deprecated since version 1.0.2: This no longer queries all fields, instead only selecting the defaults, which is equivalent to calling populate() with no arguments.

Project

class FLIR.conservator.wrappers.project.Project(conservator, instance)
classmethod create(conservator, name, fields=None)

Creates a new project with the given name, and returns it with the specified fields.

Note that this requires the privilege to create projects.

delete()

Delete the project.

underlying_type

alias of Project

Collection

class FLIR.conservator.wrappers.collection.Collection(conservator, instance)
create_child(name, fields=None)

Create a new child collection with the given name, returning it with the specified fields.

create_dataset(name, fields=None)

Creates a dataset in the current folder/project.

classmethod create_from_remote_path(conservator, path, fields=None)

Return a new collection at the specified path, with the given fields, creating new collections as necessary.

If the path already exists, raises RemotePathExistsException.

classmethod create_root(conservator, name, fields=None)

Create a new root collection with the specified name and return it with the specified fields.

This requires your account to have privilege to create new projects.

delete()

Delete the collection.

download(path=None, include_datasets=False, include_metadata=False, include_associated_files=False, include_media=False, overwrite_datasets=False, preview_videos=False, recursive=False)

Downloads this collection to the specified path, with the specified assets included. If path is None or not given, the collection is downloaded to a directory named after the collection.

download_datasets(path, no_meter=False, overwrite=False)

Clones and pulls all datasets in the collection.

download_images(path, no_meter=False)

Downloads images to the given path.

download_media(path, preview_videos=False, no_meter=False)

Downloads videos and images. If preview_videos is set, download preview videos in place of full videos.

download_metadata(path)

Downloads image and video metadata to media_metadata/.

download_videos(path, preview_videos=False, no_meter=False)

Downloads videos to the given path. If preview_videos is set, download preview videos in place of full videos.

classmethod from_remote_path(conservator, path, make_if_no_exist=False, fields=None)

Returns a collection at the specified path, with the specified fields. If make_if_no_exist is True, then collection(s) will be created to reach that path if it doesn’t exist.
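
A sketch combining from_remote_path() with download(); conservator is an existing Conservator instance and the paths are placeholders:

>>> from FLIR.conservator.wrappers.collection import Collection
>>> collection = Collection.from_remote_path(conservator, "/Some Project/Some Folder", fields="name")
>>> collection.download("./Some Folder", include_media=True, recursive=True)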

get_child(name, make_if_no_exists=False, fields=None)

Returns the child collection with the given name and specified fields.

If it does not exist, and make_if_no_exists is True, it will be created.

get_datasets(fields=None, search_text='')

Returns a query for all datasets in this collection.

get_images(fields=None, search_text='')

Returns a query for all images in this collection.

get_media(fields=None, search_text='')

Yields all videos, then images in this collection.

Parameters:

fields – The fields to include in the media. All fields must exist on both Image and Video types.

get_videos(fields=None, search_text='')

Returns a query for all videos in this collection.

move(parent)

Move the collection into another collection.

recursively_get_children(include_self=False, fields=None)

Yields all child collections recursively.

Parameters:
  • include_self – If True, yield this collection too.

  • fields – The fields to populate on children.

recursively_get_images(fields=None, search_text='')

Yields all images in this and child collections, recursively.

recursively_get_media(fields=None, search_text='')

Yields all videos and images in this and child collections, recursively.

recursively_get_videos(fields=None, search_text='')

Yields all videos in this and child collections, recursively.

remove_media(media_id)

Remove the given media from this collection.

underlying_type

alias of Collection

exception FLIR.conservator.wrappers.collection.InvalidRemotePathException
exception FLIR.conservator.wrappers.collection.RemotePathExistsException

Dataset

class FLIR.conservator.wrappers.dataset.Dataset(conservator, instance)
add_frames(frames, fields=None, overwrite=False)

Given a list of frames, add them to the dataset. If overwrite is True and the frame was already in the dataset, the dataset frame attributes will be replaced with the source frame attributes.

add_frames_with_associations(frames, associated_frame_table, fields=None, overwrite=False)

Given a list of frames, add them to the dataset and associate them with the frames found in associated_frame_table. If overwrite is True and the frame was already in the dataset, the dataset frame attributes will be replaced with the source frame attributes.

Parameters:
  • frames – A list of Frame objects to be added to the dataset.

  • associated_frame_table – A dictionary mapping source video frame IDs to a list of AddAssociatedFrameInput objects. Each AddAssociatedFrameInput object can refer to either a video frame ID or a dataset frame ID, but not both at once.

associate_frame(dataset_frame_id, associated_frame_input)

Associate the given dataset frame ID with another frame specified in associated_frame_input.

Parameters:
  • dataset_frame_id – The ID of a dataset frame to associate with another frame.

  • associated_frame_input – An AddAssociatedFrameInput object, which references either another dataset frame ID or a video frame ID, but not both.

commit(message)

Commits changes to the dataset made outside of the CVC/Git system (for instance, using the Web UI, or most methods within this class). The current user will be the author of the commit.

Parameters:

message – The commit message.
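
A sketch of adding frames from a video and committing the change; conservator is an existing Conservator instance, and the IDs and message are placeholders:

>>> from FLIR.conservator.managers import DatasetManager, VideoManager
>>> dataset = DatasetManager(conservator).from_id("some-dataset-id")
>>> video = VideoManager(conservator).from_id("some-video-id")
>>> frames = list(video.get_frames(fields="id"))
>>> dataset.add_frames(frames)
>>> dataset.commit("Add frames from some-video-id")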

classmethod create(conservator, name, collections=None, fields=None)

Create a dataset with the given name, including the given collections, if specified. The dataset is returned with the requested fields.

delete()

Delete the dataset.

download_blob(blob_id, path)

Download a blob to the specified path. A blob_id can be obtained using get_tree_by_id().

download_blob_by_name(filename, path, commit_id='HEAD')

Download a blob to the specified path, looking it up by filename at the given commit_id.

If path is a file, the blob will be saved to that file. If it is a directory, the blob will be saved to a file named filename within the directory at path.

download_latest_index(path)

Downloads the Dataset’s latest index.json file to the specified path. If the path is a directory, the file will be downloaded to index.json within that directory.

This can be used as a faster alternative to a full repository clone for some operations.

download_metadata(path)

Downloads metadata to path/name.json, where name is the dataset’s name.

classmethod from_local_path(conservator, path='.')

Returns a new Dataset instance using the ID found in index.json at the provided path.

generate_metadata()

Queries Conservator to generate metadata for the dataset.

get_blob_id_by_name(filename, commit_id='HEAD')

Returns a blob’s id (hash) from filename. This searches the root directory of the given commit_id, and then searches associated_files. It returns the hash of the first blob found with a matching name.

get_blob_url_by_id(blob_id)

Returns a URL that can be used to download a blob. A blob_id can be obtained using get_tree_by_id().
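
A sketch of downloading a file from the dataset repository by name, assuming dataset is a Dataset instance; the filename and local path are placeholders:

>>> blob_id = dataset.get_blob_id_by_name("index.json")
>>> dataset.download_blob(blob_id, "downloads/index.json")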

get_commit_by_id(commit_id='HEAD', fields=None)

Returns a specific commit from a commit_id. The ID can be a hash, or an identifier like HEAD.

get_commit_history(fields=None)

Returns a list of version control commits for the Dataset. Note that some older datasets may not have a repository, causing this method to fail.

get_frames(search_text='', fields=None)

Returns a paginated query for dataset frames within this dataset, filtering with search_text.
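
A sketch of iterating over dataset frames, assuming dataset is a Dataset instance; the returned paginated query can be iterated directly:

>>> for frame in dataset.get_frames(fields="id"):
...     print(frame.id)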

get_frames_reversed(search_text='', fields=None)

Returns a paginated query for dataset frames within this dataset, filtering with search_text in reverse order.

get_git_url()

Returns the Git URL used for cloning this Dataset.

get_root_tree_id(commit_id='HEAD')

Returns the id (hash) of the tree at the given commit_id. The ID can be a hash, or an identifier like HEAD.

Defaults to the latest commit.

get_tree_by_id(tree_id='HEAD', fields=None)

Returns a tree from a tree_id. The ID can be a hash, or an identifier like HEAD.

remove_frames(frames, fields=None)

Given a list of frames, remove them from the dataset. Detects whether the list contains video Frames or DatasetFrames, but will fail if you mix both types in the same list.

underlying_type

alias of Dataset

wait_for_dataset_commit()

Wait for the server to create the first commit to a new dataset.

wait_for_history_len(num_expected_commits, max_tries=10)

Waits until the number of commits in the Dataset’s history is at least the requested number. Intended as a heuristic for checking whether a recent commit has finished processing on the server, though it can be misleading if multiple commits are being pushed to the dataset from different sources (e.g. if a local clone and the web UI are being used to make changes in parallel).

Media

class FLIR.conservator.wrappers.media.MediaCompare(value)

Results of comparing a local file with a Conservator file.

class FLIR.conservator.wrappers.media.MediaType(conservator, instance)

A media type is an image or a video. It can be uploaded (using upload()) or downloaded.

compare(local_path)

Uses MD5 checksums to compare the media contents to a local file.

Returns the result as a MediaCompare object.

Parameters:

local_path – Path to local copy of file for comparison with remote.

download(path, no_meter=False)

Download media to path.

get_all_frames_paginated(fields=None)

Yields all frames in the video, 15 at a time. This is only useful if you’re dealing with very long videos and want to paginate frames yourself. If the video is short, you could just use get_frames() to get all frames.

get_annotations(fields=None)

Returns a list of the media’s annotations with the specified fields.

get_frame_by_index(index, fields=None)

Returns a single frame at a specific index in the video.

remove()

Remove a video or image from Conservator. Note that this is a permanent action that will remove it from all collections.

static upload(conservator, upload_request)

Upload a new media object based on the info in a MediaUploadRequest object:

  • from a local file_path

  • to the Conservator name remote_name

  • as a member of collection if given; otherwise added to no collection (orphan)

Conservator Images have separate queries from Videos, but they do not get their own mutations, i.e. they are treated as “Videos” in the upload process. In fact, an uploaded media file is treated by the Conservator server as a video until file processing has finished; if it turns out to be an image type (e.g. JPEG), it will disappear from Videos and appear under Images.

Returns an updated MediaUploadRequest, which contains the ID of the created media object (either a Video ID or an Image ID), or else an error message if something went wrong.

static verify_md5(local_path, expected_md5)

Helper for Video and Image md5sum comparisons; both types track an md5sum in Conservator, but not in the same field.

exception FLIR.conservator.wrappers.media.MediaUploadException

Raised if an exception occurs during a media upload.

class FLIR.conservator.wrappers.media.MediaUploadRequest(file_path: str, collection_id: str = '', remote_name: str = '', complete: bool = False, media_id: str = '', error_message: str = '')

Tracks the inputs and results of a media upload.

Image

class FLIR.conservator.wrappers.image.Image(conservator, instance)
get_frame(fields=None)

Get the frame of the Image. Because images only have one frame, this is the same as MediaType.get_frame_by_index() with index 0.

underlying_type

alias of Image

Video

class FLIR.conservator.wrappers.video.Video(conservator, instance)
get_frames(fields=None)

Get the video’s frames.

underlying_type

alias of Video

Frame

class FLIR.conservator.wrappers.frame.Frame(conservator, instance)

A frame within a media object (image or video).

add_annotations(annotation_create_list, fields=None)

Adds annotations using the specified list of AnnotationCreate objects.

Returns a list of the added annotations, each with the specified fields.

add_prediction(prediction_create, fields=None)

Adds a prediction using the specified prediction_create object.

Returns the added prediction with the specified fields.

download(path, no_meter=False)

Download the frame image under the directory path. The filename will be [media id]-[frame index].jpg, where media id is the ID of the media this frame is part of, and frame index is zero-padded to 6 digits.

set_annotation_metadata(annotation_id: str, annotation_metadata: str, fields=None)

Set custom metadata on a video annotation.

underlying_type

alias of Frame

DatasetFrame

class FLIR.conservator.wrappers.dataset_frame.DatasetFrame(conservator, instance)

A frame within a dataset.

add_dataset_annotations(dataset_annotation_create_list, fields=None)

Adds annotations using the specified list of CreateDatasetAnnotationInput objects.

Returns a list of the added annotations, each with the specified fields.

approve()

Approve the dataset frame.

approve_dataset_annotation(dataset_id, annotation_id)

Approve an annotation within dataset frame.

flag()

Flag the dataset frame.

mark_empty()

Mark the dataset frame as empty.

request_changes()

Request changes to the dataset frame.

request_changes_annotation(dataset_id, annotation_id)

Request changes to an annotation within dataset frame.

set_dataset_annotation_metadata(annotation_id: str, annotation_metadata: str, fields=None)

Set custom metadata on a dataset annotation.

underlying_type

alias of DatasetFrame

unflag()

Unflag the dataset frame.

unmark_empty()

Unmark the dataset frame as empty.

unset_qa_status()

Unset the QA status of the dataset frame.

unset_qa_status_annotation(dataset_id, annotation_id)

Unset the QA status of an annotation within dataset frame.

update_dataset_annotation(annotation_input, annotation_id, fields=None)

Update existing annotation with ID annotation_id, using the specified UpdateAnnotationInput object.

Returns updated annotation with the specified fields.

update_qa_status_note(qa_status_note: str)

Change the QA status note for a dataset frame.

update_qa_status_note_annotation(dataset_id, qa_status_note: str, annotation_id)

Change the QA status note for an annotation within dataset frame.

Metadata

class FLIR.conservator.wrappers.metadata.MetadataType(conservator, instance)

Adds download_metadata() and upload_metadata().

Every Metadata file belongs to a parent, and the relevant mutations to create such a file depend on the parent type. The parent object’s class is responsible for specifying the mutations as ‘metadata_gen_url’ and ‘metadata_confirm’ members, and also an ‘id_type’ member that names the ID argument for those mutations (e.g. “dataset_id” or “video_id”).

Arguments and return values for the Metadata mutations:

  • create – args = parent_id, filename, content_type; return = string signed_url, string url

  • confirm – args = parent_id, signed_url; return = boolean success

download_metadata(path)

Downloads the metadata field to path/filename.json, where filename is the media’s filename.

upload_metadata(file_path, content_type=None)

Uploads a file to Conservator as an associated file of this Metadata’s parent object.

exception FLIR.conservator.wrappers.metadata.MetadataUploadException

Raised if an exception occurs during a metadata upload.

FileLocker

class FLIR.conservator.wrappers.file_locker.FileLockerType(conservator, instance)

Adds download_associated_files(), upload_associated_file(), and remove_associated_file().

Every FileLocker file is associated with a Conservator object, and the relevant mutations to create or remove such a file depend on the object type. The subclass is responsible for specifying those mutations as ‘file_locker_gen_url’ and ‘file_locker_remove’ members, and also an ‘id_type’ member that names the ID argument for those mutations (e.g. “dataset_id” or “video_id”).

Arguments and return values for the FileLocker mutations:

  • create – args = parent_id, filename, content_type; return = string signed_url, string url

  • remove – args = parent_id, filename; return = instance of parent object type

download_associated_files(path, no_meter=False)

Downloads associated files (from file locker) to associated_files/.

remove_associated_file(filename)

Removes the named file from the set of associated files for this FileLocker’s parent object.

upload_associated_file(file_path, content_type=None)

Uploads a file to Conservator as an associated file of this FileLocker’s parent object.

exception FLIR.conservator.wrappers.file_locker.FileLockerUploadException

Raised if an exception occurs during a file-locker upload.

Managers

A Manager is simply a bundle of utilities for querying a specific type.

class FLIR.conservator.managers.CollectionManager(conservator)

Bases: SearchableTypeManager

create_from_parent_id(name, parent_id, fields=None)

Create a new child collection named name, under the parent collection with the given parent_id, and return it with the given fields.

create_from_path(path, fields=None)

Deprecated since version 1.1.0: Use create_from_remote_path() instead.

create_from_remote_path(path, fields=None)

Return a new collection at the specified path, with the given fields, creating new collections as necessary.

If the path already exists, raises RemotePathExistsException.

create_root(name, fields=None)

Create a new root collection with the specified name and return it with the specified fields.

from_remote_path(path, make_if_no_exist=False, fields=None)

Returns a collection at the specified path, with the specified fields. If make_if_no_exist is True, then collection(s) will be created to reach that path.

from_string(string, fields='id')

Returns a Collection with the given fields from a string. If the string contains any slashes, it is assumed to be a path, and from_remote_path() is used. Otherwise, from_id() is used.

This is used for CLI commands to let users specify paths or ids when doing operations, and should be the preferred method for getting a Collection from any user-facing input.
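
A short sketch, assuming conservator is an existing Conservator instance; the path is a placeholder:

>>> from FLIR.conservator.managers import CollectionManager
>>> collections = CollectionManager(conservator)
>>> collection = collections.from_string("/Some Project/Some Folder")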

upload(collection_id, path, video_metadata, associated_files, media, recursive, resume_media, max_retries)

Upload files under the specified path to the given collection. max_retries only applies to media files (other types of files are usually small enough not to need retries).

class FLIR.conservator.managers.DatasetManager(conservator)

Bases: SearchableTypeManager

create(name, collections=None, fields=None)

Create a dataset with the given name, including the given collections, if specified. The dataset is returned with the requested fields.

from_local_path(path='.')

Create a Dataset from a path containing an index.json file.

from_string(string, fields='id')

Returns a Dataset with the given fields from a string. If the string contains any slashes (for instance /path/to/some/collection/dataset_name), it is assumed to be a path, and the parent directory of the path will be fetched, and its datasets will be searched for an exact match on dataset name.

Next, we check if string matches any single dataset name exactly using by_exact_name().

Otherwise, we assume string is an ID, and use from_id().

This is used for CLI commands to let users specify paths or ids when doing operations, and should be the preferred method for getting a Dataset from any user-facing input.

class FLIR.conservator.managers.ImageManager(conservator)

Bases: SearchableTypeManager, MediaTypeManager

from_string(string, fields='id')

Returns an Image with the given fields from a string. If the string contains any slashes (for instance /path/to/some/collection/image_name), it is assumed to be a path, and the parent directory of the path will be fetched, and its images will be searched for an exact match on image name.

Next, we check if string matches any single image name exactly using by_exact_name().

Otherwise, we assume string is an ID, and use from_id().

This is used for CLI commands to let users specify paths or ids when doing operations, and should be the preferred method for getting an Image from any user-facing input.

class FLIR.conservator.managers.ProjectManager(conservator)

Bases: SearchableTypeManager

create(name, fields=None)

Create a new project with the given name, and return it with the specified fields.

class FLIR.conservator.managers.VideoManager(conservator)

Bases: SearchableTypeManager, MediaTypeManager

from_string(string, fields='id')

Returns a Video with the given fields from a string. If the string contains any slashes (for instance /path/to/some/collection/video_name), it is assumed to be a path, and the parent directory of the path will be fetched, and its videos will be searched for an exact match on video name.

Next, we check if string matches any single video name exactly using by_exact_name().

Otherwise, we assume string is an ID, and use from_id().

This is used for CLI commands to let users specify paths or ids when doing operations, and should be the preferred method for getting a Video from any user-facing input.

TypeManager

exception FLIR.conservator.managers.type_manager.AmbiguousIdentifierException(identifier)
class FLIR.conservator.managers.type_manager.TypeManager(conservator, underlying_type)

Base Manager class.

Parameters:
  • conservator – Conservator instance to use for queries.

  • underlying_type – Underlying TypeProxy class to wrap.

from_id(id_)

Creates a new instance of underlying_type from an ID.

This does not populate any fields besides id. You must call populate() on the returned instance to populate any fields.

Note

Use id_exists() to verify that an ID is correct. Otherwise an InvalidIdException may be thrown on later operations.

from_json(json)

Return a wrapped instance from a dictionary (usually produced by calling to_json()). The ID should be included for the returned instance to be useful.

from_string(string, fields=None)

This returns an instance from a string identifier.

By default, it expects an ID, but subclasses can (and should) add alternative identifiers. For instance, collections can be identified by their path, so the collections manager should be checking if the identifier is a path.

Invalid identifiers should raise helpful exceptions.

id_exists(id_)

Returns True if the id is valid for the underlying_type.

SearchableTypeManager

class FLIR.conservator.managers.searchable.SearchableTypeManager(conservator, underlying_type)

Bases: TypeManager

Adds the ability to search using Conservator’s Advanced Search.

The underlying type must specify a search_query.

Most queries return a FLIR.conservator.paginated_query.PaginatedQuery.

all()

Searches for all instances.

by_exact_name(name, fields=None)

Returns a search for an exact name.

Convert the returned query to a list, and check length to determine if a single match was found (or none, or many).
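
A sketch following that advice, using a ProjectManager; conservator is an existing Conservator instance:

>>> from FLIR.conservator.managers import ProjectManager
>>> matches = list(ProjectManager(conservator).by_exact_name("ADAS", fields="id"))
>>> if len(matches) == 1:
...     project = matches[0]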

count(search_text='')

Returns the number of instances returned by searching for search_text.

count_all()

Returns the total number of instances.

search(search_text, **kwargs)

Performs a search with the specified search_text.

MediaTypeManager

class FLIR.conservator.managers.media.MediaTypeManager(conservator)

Bases: object

Base class for media type managers.

is_uploaded_media_id_processed(media_id)

Returns True if an ID returned by upload() has been processed, and False otherwise.

When media is uploaded, it begins processing as a video. It may turn into an image, requiring different queries. This method can be used to verify that an ID is done processing, and its type won’t change in the future.

upload(file_path, collection=None, remote_name=None)

Upload a new media object from a local file_path, with the specified remote_name. It is added to collection, if given; otherwise it is not added to any collection (orphan).

Conservator Images have separate queries from Videos, but they do not get their own mutations, i.e. they are treated as “Videos” in the upload process. In fact, an uploaded media file is treated by the Conservator server as a video until file processing has finished; if it turns out to be an image type (e.g. JPEG), it will disappear from Videos and appear under Images.

Returns the ID of the created media object. Note that it may be a Video ID or an Image ID.

Parameters:
  • file_path – The local file path to upload.

  • collection – If specified, the Collection object, or str Collection ID to upload the media to. If not specified, the media is not uploaded to any specific collection (orphan).

  • remote_name – If given, set the name for the media file on the server.
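
A sketch of uploading a single video; conservator is an existing Conservator instance, collection is a Collection obtained elsewhere (for example via CollectionManager.from_string()), and the paths and names are placeholders:

>>> from FLIR.conservator.managers import VideoManager
>>> videos = VideoManager(conservator)
>>> media_id = videos.upload("clips/drive.mp4", collection=collection, remote_name="drive-01.mp4")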

upload_many_to_collection(file_paths, collection, process_count=None, resume=False, max_retries=-1)

Upload many files in parallel. Returns a list of the uploaded media IDs.

Parameters:
  • file_paths – A list of str file paths to upload to collection. Alternatively, a list of (str, str) tuples holding pairs of local_path, remote_name. If remote_name is None, the local filename will be used.

  • collection – The collection to upload media files to.

  • process_count – Number of concurrent upload processes. Passing None will use os.cpu_count().

  • resume – Whether to first check if the file was previously uploaded.

  • max_retries – The maximum number of upload retries per file in case of network errors.

wait_for_processing(media_ids, timeout_seconds=600, check_frequency_seconds=5)

Wait for an id, or list of ids, to complete processing.

Parameters:
  • media_ids – A single str media ID, or a list of media IDs to check.

  • timeout_seconds – The maximum amount of time to wait, in seconds.

  • check_frequency_seconds – How long to wait between checks, in seconds.
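
Continuing the upload sketch above, you might wait for processing to finish before querying the media:

>>> videos.wait_for_processing(media_id, timeout_seconds=300)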

exception FLIR.conservator.managers.media.ProcessingTimeoutError

Bases: TimeoutError

Raised when the amount of time spent waiting for media to process exceeds the requested timeout.

PaginatedQuery

exception FLIR.conservator.paginated_query.ConcurrentQueryModificationException

Raised when a paginated query is modified in the middle of its execution.

class FLIR.conservator.paginated_query.PaginatedQuery(conservator, wrapping_type=None, query=None, base_operation=None, fields=None, page_size=25, unpack_field=None, reverse=False, total_unpack_field=None, **kwargs)

Enables pagination of any query with page and limit arguments.

Assume you want to iterate through all the Projects in a project search query. You could do something like the following:

>>> results = PaginatedQuery(conservator, query=Query.projects, search_text="ADAS")
>>> results = results.including("name")
>>> for project in results:
...     print(project.name)
Parameters:
  • conservator – The conservator instance to query.

  • wrapping_type – Not required. Included for backwards-compatibility.

  • query – The GraphQL Query to use.

  • base_operation – Not required. Included for backwards-compatibility.

  • fields – Fields to include in the returned objects.

  • page_size – The page size to use when submitting requests.

  • unpack_field – If specified, instead of directly returning the resulting object(s) from a query, returns the specified field. For instance, the query Query.dataset_frames_only is paginated, but returns a non-iteratable DatasetFrames object. The list of DatasetFrame is stored under the “dataset_frames” field. So if querying this, we’d want to set unpack_field to “dataset_frames”.

  • reverse – If True, query for results in reverse order. Intended for certain API calls that return results in a fixed order, e.g. dataset frames. The capability to grab frames in reverse order may make the collection of the newest items much more efficient. Requires the total_unpack_field to be set.

  • total_unpack_field – If reverse is true, the query fields need to include a field containing the total number of entries. Supply the field name to this parameter.

excluding(*fields)
Parameters:

fields – Fields to exclude in the results.

excluding_fields(*fields)
Parameters:

fields – Fields to exclude in the results.

filtered_by(func=operator.eq, **kwargs)

Filter results by field value.

For example, you can keep only instances whose fields equal given values; see the sketch after the parameter list below.

Parameters:
  • func – A function to use to compare an instance’s field with the provided value. If it returns False for any field, the instance will be skipped when returning results.

  • kwargs – A list of field name-value pairs to pass through the filter function for each instance.
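
A sketch building on the example above, keeping only projects whose name is exactly “ADAS” (it assumes filtered_by() follows the same chaining convention as including()):

>>> results = PaginatedQuery(conservator, query=Query.projects, search_text="ADAS")
>>> results = results.including("name").filtered_by(name="ADAS")
>>> for project in results:
...     print(project.name)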

first()

Returns the first result, or None if it doesn’t exist.

including(*fields)
Parameters:

fields – Fields to include in the results.

including_all_fields()

Include all non-excluded fields in the results.

including_fields(*fields)
Parameters:

fields – Fields to include in the results.

page_size(page_size)

Set the number of items to request in each query.

Typically, larger values will make the overall execution faster, but individual requests may be large and slow.

with_fields(fields)

Sets the query’s FieldsRequest to fields.

LocalDataset

exception FLIR.conservator.local_dataset.InvalidLocalDatasetPath(path)
class FLIR.conservator.local_dataset.LocalDataset(conservator, path)

Provides utilities for managing local datasets.

This replicates the functionality of CVC, and should now be the preferred method of working with local datasets.

Parameters:
  • conservator – A Conservator instance to use for uploading new images.

  • path – The path to the local dataset. This should point to the root directory (containing index.json and JSONL files).

add_local_changes(skip_validation=False)

Stages changes to index.json or *.jsonl files and associated_files for the next commit.

Parameters:

skip_validation – By default, index.json or *.jsonl are validated against a schema. If the schema is incorrect and you’re sure your source files are valid, you can pass True to skip the check. In this case, please also submit a PR so we can update the schema.

checkout(commit_hash, verbose=True)

Checks out a specific commit. This will delete any local changes in index.json or associated_files.

Parameters:

verbose – If False, run git commands with the -q option.

static clone(dataset, clone_path=None, verbose=True, max_retries=5, timeout=5)

Clone a dataset to a local path, returning a LocalDataset.

Parameters:
  • dataset – The dataset to clone. It must have a repository registered in Conservator.

  • clone_path – The path where the git repo should be created. If None, the dataset is cloned into a subdirectory of the current path, using the Dataset’s name.

  • verbose – If False, run git commands with the -q option.

  • max_retries – Retry this many times if the git clone command fails. This is intended to account for the race condition when a dataset has just been created using an API call and its repository is not immediately available.

  • timeout – Delay this many seconds between retries.
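
A sketch of a typical clone-and-download workflow; conservator is an existing Conservator instance and the dataset name and paths are placeholders:

>>> from FLIR.conservator.local_dataset import LocalDataset
>>> from FLIR.conservator.managers import DatasetManager
>>> dataset = DatasetManager(conservator).from_string("MyDataset")
>>> local = LocalDataset.clone(dataset, clone_path="./MyDataset")
>>> local.download(include_eight_bit=True)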

commit(message, verbose=True)

Commit added changes to the local git repo, with the given commit message.

Parameters:

verbose – If False, run git commands with the -q option.

download(include_raw=False, include_eight_bit=True, process_count=10, use_symlink=False, no_meter=False, tries=5)

Downloads the files listed in frames.jsonl or index.json of the local dataset.

Parameters:
  • include_raw – If True, download raw image data to rawData/.

  • include_eight_bit – If True, download eight-bit images to data/.

  • process_count – Number of concurrent download processes. Passing None will use os.cpu_count().

  • use_symlink – If True, use symbolic links instead of hardlinks when linking the cache and data.

  • no_meter – If True, don’t display file download progress meters.

  • tries – Specify a retry limit when recovering from connection errors.

get_dataset_info()

Get the dataset’s top-level info.

Collect the data from dataset.jsonl if present, else fall back to using the index.json file.

get_frames()

Get the frames array for the dataset.

Collect the data from frames.jsonl if present, else fall back to using the index.json file.

static get_image_info(path)

Returns image info to be added to a Dataset’s index.json, or None if there was an error.

This opens the path using PIL to verify it is a JPEG image, and get the dimensions.

get_index()

Returns the object in index.json.

static get_jsonl_data(jsonl_file)

Create a single JSON list object from a JSONL source file.

static get_max_frame_index(dataset_frames)

Returns the maximum frame index in a dataset’s frames.

This only counts frames uploaded directly to the dataset.

get_staged_images()

Returns the staged image paths from the staging file.

get_videos()

Get the videos array for the dataset.

Collect the data from videos.jsonl if present, else fall back to using the index.json file.

git_branch()

Return the git branch name for the dataset repository, if any.

git_status()

Parse the git branch and status for the dataset repository.

Returned table format:

added – contains a dictionary:

  • “staged” contains a list of new files that have been staged.

  • “working” contains a list of untracked files in the working directory.

modified – contains a dictionary:

  • “staged” contains a list of modified files that have been staged.

  • “working” contains a list of modified files in the working directory.

other – contains a list of dictionaries; for each dictionary in the list:

  • “index” contains the index status character (e.g. ‘A’, ‘D’, etc.).

  • “working” contains the working directory status character.

  • “source” contains the file name associated with the status.

  • A rename or copy status will also contain a “dest” key.

pull(verbose=True)

Pulls the latest repository state.

Parameters:

verbose – If False, run git commands with the -q option.

push_commits(verbose=True)

Push the git repo.

Parameters:

verbose – If False, run git commands with the -q option.

push_staged_images(copy_to_data=True, tries=5)

Push the staged images.

This reads the staged image paths, uploads them, adds metadata to index.json (or frames.jsonl if it exists), and deletes the staged image paths.

Parameters:
  • copy_to_data – If True, copy the staged images to the cache and link with the data directory. This produces the same result as downloading the images back from conservator (but without downloading).

  • tries – Specify a retry limit when recovering from HTTP 502 errors.

stage_local_images(image_paths)

Adds image paths to the staging file.

unstage_local_images(image_paths)

Remove image paths from the staging file.

validate_index(index_location=None)

Validates that the given index.json matches the expected JSON Schema.

validate_jsonl()

Validate JSONL files line by line.

write_frames_to_jsonl(frames_list)

Rewrite frames.jsonl with the contents of frames_list.