feast.infra.offline_stores package


Submodules

feast.infra.offline_stores.bigquery module

class feast.infra.offline_stores.bigquery.BigQueryOfflineStore[source]

Bases: feast.infra.offline_stores.offline_store.OfflineStore

static get_historical_features(config: feast.repo_config.RepoConfig, feature_views: List[feast.feature_view.FeatureView], feature_refs: List[str], entity_df: Union[pandas.core.frame.DataFrame, str], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, full_feature_names: bool = False) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Retrieves the point-in-time correct historical feature values for the specified entity rows.

Parameters
  • config – The config for the current feature store.

  • feature_views – A list containing all feature views that are referenced in the entity rows.

  • feature_refs – The features to be retrieved.

  • entity_df – A collection of rows containing all entity columns on which features need to be joined, as well as the timestamp column used for point-in-time joins. Either a pandas dataframe can be provided or a SQL query.

  • registry – The registry for the current feature store.

  • project – Feast project to which the feature views belong.

  • full_feature_names – If True, feature names will be prefixed with the corresponding feature view name, changing them from the format “feature” to “feature_view__feature” (e.g. “daily_transactions” changes to “customer_fv__daily_transactions”).

Returns

A RetrievalJob that can be executed to get the features.
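
In practice this method is invoked indirectly through FeatureStore.get_historical_features rather than being called directly. A minimal usage sketch, assuming a feature repository configured with a BigQuery offline store and a hypothetical feature view named driver_hourly_stats:

    import pandas as pd
    from datetime import datetime
    from feast import FeatureStore

    # Load the feature store from a repo whose feature_store.yaml points at BigQuery.
    store = FeatureStore(repo_path=".")

    # Entity dataframe: one row per entity key, plus the event timestamp used
    # for the point-in-time join.
    entity_df = pd.DataFrame(
        {
            "driver_id": [1001, 1002],
            "event_timestamp": [datetime(2022, 5, 1), datetime(2022, 5, 2)],
        }
    )

    # Returns a RetrievalJob (here a BigQueryRetrievalJob); nothing executes yet.
    job = store.get_historical_features(
        entity_df=entity_df,
        features=["driver_hourly_stats:conv_rate"],
        full_feature_names=False,
    )
    training_df = job.to_df()  # runs the query and returns a pandas DataFrame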

static offline_write_batch(config: feast.repo_config.RepoConfig, feature_view: feast.feature_view.FeatureView, table: pyarrow.lib.Table, progress: Optional[Callable[[int], Any]])[source]

Writes the specified arrow table to the data source underlying the specified feature view.

Parameters
  • config – The config for the current feature store.

  • feature_view – The feature view whose batch source should be written.

  • table – The arrow table to write.

  • progress – Function to be called once a portion of the data has been written, used to show progress.
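
This method is normally reached through FeatureStore.write_to_offline_store rather than called directly. A sketch under that assumption; the feature view name and dataframe columns below are hypothetical and must match the feature view's batch source schema:

    import pandas as pd
    from datetime import datetime
    from feast import FeatureStore

    store = FeatureStore(repo_path=".")

    # Columns must match the schema of the feature view's batch source.
    df = pd.DataFrame(
        {
            "driver_id": [1001],
            "conv_rate": [0.85],
            "event_timestamp": [datetime(2022, 5, 1)],
            "created": [datetime(2022, 5, 1)],
        }
    )

    # Converts the dataframe to a pyarrow.Table and dispatches to
    # offline_write_batch on the configured offline store.
    store.write_to_offline_store("driver_hourly_stats", df)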

static pull_all_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts all the entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static pull_latest_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, created_timestamp_column: Optional[str], start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts the latest entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column, used to determine which rows are the most recent.

  • created_timestamp_column – The column indicating when the row was created, used to break ties.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static write_logged_features(config: feast.repo_config.RepoConfig, data: Union[pyarrow.lib.Table, pathlib.Path], source: feast.feature_logging.LoggingSource, logging_config: feast.feature_logging.LoggingConfig, registry: feast.infra.registry.base_registry.BaseRegistry)[source]

Writes logged features to a specified destination in the offline store.

If the specified destination exists, data will be appended; otherwise, the destination will be created and data will be added. Thus this function can be called repeatedly with the same destination to flush logs in chunks.

Parameters
  • config – The config for the current feature store.

  • data – An arrow table or a path to parquet directory that contains the logs to write.

  • source – The logging source that provides a schema and some additional metadata.

  • logging_config – A LoggingConfig object that determines where the logs will be written.

  • registry – The registry for the current feature store.

class feast.infra.offline_stores.bigquery.BigQueryOfflineStoreConfig(*, type: Literal['bigquery'] = 'bigquery', dataset: pydantic.types.StrictStr = 'feast', project_id: Optional[pydantic.types.StrictStr] = None, billing_project_id: Optional[pydantic.types.StrictStr] = None, location: Optional[pydantic.types.StrictStr] = None, gcs_staging_location: Optional[str] = None)[source]

Bases: feast.repo_config.FeastConfigBaseModel

Offline store config for GCP BigQuery

billing_project_id: Optional[pydantic.types.StrictStr]

(optional) GCP project name used to run the BigQuery jobs in

dataset: pydantic.types.StrictStr

(optional) BigQuery Dataset name for temporary tables

gcs_staging_location: Optional[str]

(optional) GCS location used for offloading BigQuery results as parquet files.

location: Optional[pydantic.types.StrictStr]

(optional) GCP location name used for the BigQuery offline store. Examples of location names include US, EU, us-central1, us-west4. If a location is not specified, the location defaults to the US multi-regional location. For more information on BigQuery data locations see: https://cloud.google.com/bigquery/docs/locations

project_id: Optional[pydantic.types.StrictStr]

(optional) GCP project name used for the BigQuery offline store

classmethod project_id_exists(v, values, **kwargs)[source]
type: Literal['bigquery']

Offline store type selector
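
This configuration is usually supplied through the offline_store block of feature_store.yaml; an equivalent Python construction is sketched below with placeholder project, dataset, and bucket names:

    from feast.infra.offline_stores.bigquery import BigQueryOfflineStoreConfig

    # Mirrors the offline_store block of feature_store.yaml.
    offline_config = BigQueryOfflineStoreConfig(
        type="bigquery",
        project_id="my-gcp-project",                     # placeholder project
        dataset="feast_tmp",                             # dataset for temporary tables
        location="US",                                   # BigQuery data location
        gcs_staging_location="gs://my-bucket/staging",   # optional, for remote export
    )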

class feast.infra.offline_stores.bigquery.BigQueryRetrievalJob(query: Union[str, Callable[[], AbstractContextManager[str]]], client: google.cloud.bigquery.client.Client, config: feast.repo_config.RepoConfig, full_feature_names: bool, on_demand_feature_views: Optional[List[feast.on_demand_feature_view.OnDemandFeatureView]] = None, metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata] = None)[source]

Bases: feast.infra.offline_stores.offline_store.RetrievalJob

property full_feature_names: bool

Returns True if full feature names should be applied to the results of the query.

property metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata]

Returns metadata about the retrieval job.

property on_demand_feature_views: List[feast.on_demand_feature_view.OnDemandFeatureView]

Returns a list containing all the on demand feature views to be handled.

persist(storage: feast.saved_dataset.SavedDatasetStorage, allow_overwrite: bool = False)[source]

Synchronously executes the underlying query and persists the result in the same offline store at the specified destination.

Parameters
  • storage – The saved dataset storage object specifying where the result should be persisted.

  • allow_overwrite – If True, a pre-existing location (e.g. table or file) can be overwritten. Currently not all individual offline store implementations make use of this parameter.

supports_remote_storage_export() bool[source]

Returns True if the RetrievalJob supports to_remote_storage.

to_bigquery(job_config: Optional[google.cloud.bigquery.job.query.QueryJobConfig] = None, timeout: int = 1800, retry_cadence: int = 10) str[source]

Synchronously executes the underlying query and exports the result to a BigQuery table. The underlying BigQuery job runs for a limited amount of time (the default is 30 minutes).

Parameters
  • job_config (optional) – A bigquery.QueryJobConfig to specify options like the destination table, dry run, etc.

  • timeout (optional) – The time limit of the BigQuery job in seconds. Defaults to 30 minutes.

  • retry_cadence (optional) – The number of seconds to wait between checks of the job’s completion status.

Returns

Returns the destination table name or None if job_config.dry_run is True.
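
A sketch of exporting a retrieval job’s result to a named BigQuery table, assuming job is a BigQueryRetrievalJob obtained from get_historical_features; the destination table reference is a placeholder:

    from google.cloud import bigquery

    # `job` is assumed to be a BigQueryRetrievalJob from get_historical_features.
    destination = bigquery.TableReference.from_string(
        "my-gcp-project.feast_exports.training_data"  # placeholder table reference
    )
    table_name = job.to_bigquery(
        job_config=bigquery.QueryJobConfig(destination=destination),
        timeout=1800,
    )
    print(table_name)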

to_remote_storage() List[str][source]

Synchronously executes the underlying query and exports the results to remote storage (e.g. S3 or GCS).

Implementations of this method should export the results as multiple parquet files, each file sized appropriately depending on how much data is being returned by the retrieval job.

Returns

A list of parquet file paths in remote storage.

to_sql() str[source]

Returns the underlying SQL query.

feast.infra.offline_stores.bigquery.arrow_schema_to_bq_schema(arrow_schema: pyarrow.lib.Schema) List[google.cloud.bigquery.schema.SchemaField][source]
feast.infra.offline_stores.bigquery.block_until_done(client: google.cloud.bigquery.client.Client, bq_job: Union[google.cloud.bigquery.job.query.QueryJob, google.cloud.bigquery.job.load.LoadJob], timeout: int = 1800, retry_cadence: float = 1)[source]

Waits for bq_job to finish running, up to a maximum amount of time specified by the timeout parameter (defaulting to 30 minutes).

Parameters
  • client – A bigquery.client.Client to monitor the bq_job.

  • bq_job – The bigquery.job.QueryJob or bigquery.job.LoadJob to block on until it has finished running.

  • timeout – An optional number of seconds for setting the time limit of the job.

  • retry_cadence – An optional number of seconds to wait between checks of the job’s completion status.

Raises
  • BigQueryJobStillRunning exception if the function has blocked for longer than the specified timeout (30 minutes by default).

  • BigQueryJobCancelled exception if the job has been cancelled (e.g. due to a timeout or KeyboardInterrupt)

feast.infra.offline_stores.bigquery.get_http_client_info()[source]

feast.infra.offline_stores.bigquery_source module

class feast.infra.offline_stores.bigquery_source.BigQueryLoggingDestination(*, table_ref)[source]

Bases: feast.feature_logging.LoggingDestination

classmethod from_proto(config_proto: feast.core.FeatureService_pb2.LoggingConfig) feast.feature_logging.LoggingDestination[source]
table: str
to_data_source() feast.data_source.DataSource[source]

Convert this object into a data source to read logs from an offline store.

to_proto() feast.core.FeatureService_pb2.LoggingConfig[source]
class feast.infra.offline_stores.bigquery_source.BigQueryOptions(table: Optional[str], query: Optional[str])[source]

Bases: object

Configuration options for a BigQuery data source.

classmethod from_proto(bigquery_options_proto: feast.core.DataSource_pb2.BigQueryOptions)[source]

Creates a BigQueryOptions from a protobuf representation of a BigQuery option

Parameters

bigquery_options_proto – A protobuf representation of a DataSource

Returns

Returns a BigQueryOptions object based on the bigquery_options protobuf

to_proto() feast.core.DataSource_pb2.BigQueryOptions[source]

Converts a BigQueryOptions object to its protobuf representation.

Returns

BigQueryOptionsProto protobuf

class feast.infra.offline_stores.bigquery_source.BigQuerySource(*, name: Optional[str] = None, timestamp_field: Optional[str] = None, table: Optional[str] = None, created_timestamp_column: Optional[str] = '', field_mapping: Optional[Dict[str, str]] = None, query: Optional[str] = None, description: Optional[str] = '', tags: Optional[Dict[str, str]] = None, owner: Optional[str] = '')[source]

Bases: feast.data_source.DataSource

created_timestamp_column: str
date_partition_column: str
description: str
field_mapping: Dict[str, str]
static from_proto(data_source: feast.core.DataSource_pb2.DataSource)[source]

Converts data source config in protobuf spec to a DataSource class object.

Parameters

data_source – A protobuf representation of a DataSource.

Returns

A DataSource class object.

Raises

ValueError – The type of DataSource could not be identified.

get_table_column_names_and_types(config: feast.repo_config.RepoConfig) Iterable[Tuple[str, str]][source]

Returns the list of column names and raw column types.

Parameters

config – Configuration object used to configure a feature store.

get_table_query_string() str[source]

Returns a string that can directly be used to reference this table in SQL

name: str
owner: str
property query
static source_datatype_to_feast_value_type() Callable[[str], feast.value_type.ValueType][source]

Returns the callable method that returns Feast type given the raw column type.

property table
tags: Dict[str, str]
timestamp_field: str
to_proto() feast.core.DataSource_pb2.DataSource[source]

Converts a BigQuerySource object to its protobuf representation.

validate(config: feast.repo_config.RepoConfig)[source]

Validates the underlying data source.

Parameters

config – Configuration object used to configure a feature store.
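
A construction sketch for BigQuerySource; the table reference and query below are placeholders, and exactly one of table or query should be provided:

    from feast.infra.offline_stores.bigquery_source import BigQuerySource

    # Table-backed source (placeholder table reference).
    driver_stats_source = BigQuerySource(
        name="driver_hourly_stats_source",
        table="my-gcp-project.feast.driver_hourly_stats",
        timestamp_field="event_timestamp",
        created_timestamp_column="created",
    )

    # Query-backed source; provide exactly one of `table` or `query`.
    recent_stats_source = BigQuerySource(
        name="recent_driver_stats_source",
        query="SELECT * FROM `my-gcp-project.feast.driver_hourly_stats`",
        timestamp_field="event_timestamp",
    )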

class feast.infra.offline_stores.bigquery_source.SavedDatasetBigQueryStorage(table: str)[source]

Bases: feast.saved_dataset.SavedDatasetStorage

bigquery_options: feast.infra.offline_stores.bigquery_source.BigQueryOptions
static from_proto(storage_proto: feast.core.SavedDataset_pb2.SavedDatasetStorage) feast.saved_dataset.SavedDatasetStorage[source]
to_data_source() feast.data_source.DataSource[source]
to_proto() feast.core.SavedDataset_pb2.SavedDatasetStorage[source]

feast.infra.offline_stores.file module

class feast.infra.offline_stores.file.FileOfflineStore[source]

Bases: feast.infra.offline_stores.offline_store.OfflineStore

static get_historical_features(config: feast.repo_config.RepoConfig, feature_views: List[feast.feature_view.FeatureView], feature_refs: List[str], entity_df: Union[pandas.core.frame.DataFrame, str], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, full_feature_names: bool = False) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Retrieves the point-in-time correct historical feature values for the specified entity rows.

Parameters
  • config – The config for the current feature store.

  • feature_views – A list containing all feature views that are referenced in the entity rows.

  • feature_refs – The features to be retrieved.

  • entity_df – A collection of rows containing all entity columns on which features need to be joined, as well as the timestamp column used for point-in-time joins. Either a pandas dataframe can be provided or a SQL query.

  • registry – The registry for the current feature store.

  • project – Feast project to which the feature views belong.

  • full_feature_names – If True, feature names will be prefixed with the corresponding feature view name, changing them from the format “feature” to “feature_view__feature” (e.g. “daily_transactions” changes to “customer_fv__daily_transactions”).

Returns

A RetrievalJob that can be executed to get the features.

static offline_write_batch(config: feast.repo_config.RepoConfig, feature_view: feast.feature_view.FeatureView, table: pyarrow.lib.Table, progress: Optional[Callable[[int], Any]])[source]

Writes the specified arrow table to the data source underlying the specified feature view.

Parameters
  • config – The config for the current feature store.

  • feature_view – The feature view whose batch source should be written.

  • table – The arrow table to write.

  • progress – Function to be called once a portion of the data has been written, used to show progress.

static pull_all_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts all the entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static pull_latest_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, created_timestamp_column: Optional[str], start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts the latest entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column, used to determine which rows are the most recent.

  • created_timestamp_column – The column indicating when the row was created, used to break ties.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static write_logged_features(config: feast.repo_config.RepoConfig, data: Union[pyarrow.lib.Table, pathlib.Path], source: feast.feature_logging.LoggingSource, logging_config: feast.feature_logging.LoggingConfig, registry: feast.infra.registry.base_registry.BaseRegistry)[source]

Writes logged features to a specified destination in the offline store.

If the specified destination exists, data will be appended; otherwise, the destination will be created and data will be added. Thus this function can be called repeatedly with the same destination to flush logs in chunks.

Parameters
  • config – The config for the current feature store.

  • data – An arrow table or a path to parquet directory that contains the logs to write.

  • source – The logging source that provides a schema and some additional metadata.

  • logging_config – A LoggingConfig object that determines where the logs will be written.

  • registry – The registry for the current feature store.

class feast.infra.offline_stores.file.FileOfflineStoreConfig(*, type: Literal['file'] = 'file')[source]

Bases: feast.repo_config.FeastConfigBaseModel

Offline store config for local (file-based) store

type: Literal['file']

Offline store type selector

class feast.infra.offline_stores.file.FileRetrievalJob(evaluation_function: Callable, full_feature_names: bool, on_demand_feature_views: Optional[List[feast.on_demand_feature_view.OnDemandFeatureView]] = None, metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata] = None)[source]

Bases: feast.infra.offline_stores.offline_store.RetrievalJob

property full_feature_names: bool

Returns True if full feature names should be applied to the results of the query.

property metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata]

Returns metadata about the retrieval job.

property on_demand_feature_views: List[feast.on_demand_feature_view.OnDemandFeatureView]

Returns a list containing all the on demand feature views to be handled.

persist(storage: feast.saved_dataset.SavedDatasetStorage, allow_overwrite: bool = False)[source]

Synchronously executes the underlying query and persists the result in the same offline store at the specified destination.

Parameters
  • storage – The saved dataset storage object specifying where the result should be persisted.

  • allow_overwrite – If True, a pre-existing location (e.g. table or file) can be overwritten. Currently not all individual offline store implementations make use of this parameter.

supports_remote_storage_export() bool[source]

Returns True if the RetrievalJob supports to_remote_storage.

feast.infra.offline_stores.file_source module

class feast.infra.offline_stores.file_source.FileLoggingDestination(*, path: str, s3_endpoint_override='', partition_by: Optional[List[str]] = None)[source]

Bases: feast.feature_logging.LoggingDestination

classmethod from_proto(config_proto: feast.core.FeatureService_pb2.LoggingConfig) feast.feature_logging.LoggingDestination[source]
partition_by: Optional[List[str]]
path: str
s3_endpoint_override: str
to_data_source() feast.data_source.DataSource[source]

Convert this object into a data source to read logs from an offline store.

to_proto() feast.core.FeatureService_pb2.LoggingConfig[source]
class feast.infra.offline_stores.file_source.FileOptions(uri: str, file_format: Optional[feast.data_format.FileFormat], s3_endpoint_override: Optional[str])[source]

Bases: object

Configuration options for a file data source.

uri

File source url, e.g. s3:// or local file.

Type

str

s3_endpoint_override

Custom s3 endpoint (used only with s3 uri).

Type

str

file_format

File source format, e.g. parquet.

Type

Optional[feast.data_format.FileFormat]

file_format: Optional[feast.data_format.FileFormat]
classmethod from_proto(file_options_proto: feast.core.DataSource_pb2.FileOptions)[source]

Creates a FileOptions from a protobuf representation of a file option

Parameters

file_options_proto – a protobuf representation of a datasource

Returns

Returns a FileOptions object based on the file_options protobuf

s3_endpoint_override: str
to_proto() feast.core.DataSource_pb2.FileOptions[source]

Converts a FileOptions object to its protobuf representation.

Returns

FileOptionsProto protobuf

uri: str
class feast.infra.offline_stores.file_source.FileSource(*, path: str, name: Optional[str] = '', event_timestamp_column: Optional[str] = '', file_format: Optional[feast.data_format.FileFormat] = None, created_timestamp_column: Optional[str] = '', field_mapping: Optional[Dict[str, str]] = None, s3_endpoint_override: Optional[str] = None, description: Optional[str] = '', tags: Optional[Dict[str, str]] = None, owner: Optional[str] = '', timestamp_field: Optional[str] = '')[source]

Bases: feast.data_source.DataSource

static create_filesystem_and_path(path: str, s3_endpoint_override: str) Tuple[Optional[pyarrow._fs.FileSystem], str][source]
created_timestamp_column: str
date_partition_column: str
description: str
field_mapping: Dict[str, str]
property file_format: Optional[feast.data_format.FileFormat]

Returns the file format of this file data source.

static from_proto(data_source: feast.core.DataSource_pb2.DataSource)[source]

Converts data source config in protobuf spec to a DataSource class object.

Parameters

data_source – A protobuf representation of a DataSource.

Returns

A DataSource class object.

Raises

ValueError – The type of DataSource could not be identified.

get_table_column_names_and_types(config: feast.repo_config.RepoConfig) Iterable[Tuple[str, str]][source]

Returns the list of column names and raw column types.

Parameters

config – Configuration object used to configure a feature store.

get_table_query_string() str[source]

Returns a string that can directly be used to reference this table in SQL.

name: str
owner: str
property path: str

Returns the path of this file data source.

property s3_endpoint_override: Optional[str]

Returns the s3 endpoint override of this file data source.

static source_datatype_to_feast_value_type() Callable[[str], feast.value_type.ValueType][source]

Returns the callable method that returns Feast type given the raw column type.

tags: Dict[str, str]
timestamp_field: str
to_proto() feast.core.DataSource_pb2.DataSource[source]

Converts a FileSource object to its protobuf representation.

validate(config: feast.repo_config.RepoConfig)[source]

Validates the underlying data source.

Parameters

config – Configuration object used to configure a feature store.
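
A construction sketch for FileSource; the path is a placeholder and may be a local path or an s3:// URI:

    from feast.data_format import ParquetFormat
    from feast.infra.offline_stores.file_source import FileSource

    driver_stats_source = FileSource(
        name="driver_hourly_stats_source",
        path="data/driver_stats.parquet",   # placeholder; a local path or s3:// URI
        file_format=ParquetFormat(),
        timestamp_field="event_timestamp",
        created_timestamp_column="created",
    )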

class feast.infra.offline_stores.file_source.SavedDatasetFileStorage(path: str, file_format: feast.data_format.FileFormat = <feast.data_format.ParquetFormat object>, s3_endpoint_override: Optional[str] = None)[source]

Bases: feast.saved_dataset.SavedDatasetStorage

file_options: feast.infra.offline_stores.file_source.FileOptions
static from_data_source(data_source: feast.data_source.DataSource) feast.saved_dataset.SavedDatasetStorage[source]
static from_proto(storage_proto: feast.core.SavedDataset_pb2.SavedDatasetStorage) feast.saved_dataset.SavedDatasetStorage[source]
to_data_source() feast.data_source.DataSource[source]
to_proto() feast.core.SavedDataset_pb2.SavedDatasetStorage[source]

feast.infra.offline_stores.offline_store module

class feast.infra.offline_stores.offline_store.OfflineStore[source]

Bases: abc.ABC

An offline store defines the interface that Feast uses to interact with the storage and compute system that handles offline features.

Each offline store implementation is designed to work only with the corresponding data source. For example, the SnowflakeOfflineStore can handle SnowflakeSources but not FileSources.
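
A skeletal sketch of a custom offline store implementation, stubbing out the abstract methods documented below; the class name is hypothetical and the bodies are placeholders rather than a working store:

    from datetime import datetime
    from typing import List, Optional, Union

    import pandas as pd

    from feast.data_source import DataSource
    from feast.feature_view import FeatureView
    from feast.infra.offline_stores.offline_store import OfflineStore, RetrievalJob
    from feast.infra.registry.base_registry import BaseRegistry
    from feast.repo_config import RepoConfig


    class MyOfflineStore(OfflineStore):
        """Skeleton only: each method should return a RetrievalJob for the backing store."""

        @staticmethod
        def get_historical_features(
            config: RepoConfig,
            feature_views: List[FeatureView],
            feature_refs: List[str],
            entity_df: Union[pd.DataFrame, str],
            registry: BaseRegistry,
            project: str,
            full_feature_names: bool = False,
        ) -> RetrievalJob:
            raise NotImplementedError  # placeholder

        @staticmethod
        def pull_latest_from_table_or_query(
            config: RepoConfig,
            data_source: DataSource,
            join_key_columns: List[str],
            feature_name_columns: List[str],
            timestamp_field: str,
            created_timestamp_column: Optional[str],
            start_date: datetime,
            end_date: datetime,
        ) -> RetrievalJob:
            raise NotImplementedError  # placeholder

        @staticmethod
        def pull_all_from_table_or_query(
            config: RepoConfig,
            data_source: DataSource,
            join_key_columns: List[str],
            feature_name_columns: List[str],
            timestamp_field: str,
            start_date: datetime,
            end_date: datetime,
        ) -> RetrievalJob:
            raise NotImplementedError  # placeholder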

abstract static get_historical_features(config: feast.repo_config.RepoConfig, feature_views: List[feast.feature_view.FeatureView], feature_refs: List[str], entity_df: Union[pandas.core.frame.DataFrame, str], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, full_feature_names: bool = False) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Retrieves the point-in-time correct historical feature values for the specified entity rows.

Parameters
  • config – The config for the current feature store.

  • feature_views – A list containing all feature views that are referenced in the entity rows.

  • feature_refs – The features to be retrieved.

  • entity_df – A collection of rows containing all entity columns on which features need to be joined, as well as the timestamp column used for point-in-time joins. Either a pandas dataframe can be provided or a SQL query.

  • registry – The registry for the current feature store.

  • project – Feast project to which the feature views belong.

  • full_feature_names – If True, feature names will be prefixed with the corresponding feature view name, changing them from the format “feature” to “feature_view__feature” (e.g. “daily_transactions” changes to “customer_fv__daily_transactions”).

Returns

A RetrievalJob that can be executed to get the features.

static offline_write_batch(config: feast.repo_config.RepoConfig, feature_view: feast.feature_view.FeatureView, table: pyarrow.lib.Table, progress: Optional[Callable[[int], Any]])[source]

Writes the specified arrow table to the data source underlying the specified feature view.

Parameters
  • config – The config for the current feature store.

  • feature_view – The feature view whose batch source should be written.

  • table – The arrow table to write.

  • progress – Function to be called once a portion of the data has been written, used to show progress.

abstract static pull_all_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts all the entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

abstract static pull_latest_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, created_timestamp_column: Optional[str], start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts the latest entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column, used to determine which rows are the most recent.

  • created_timestamp_column – The column indicating when the row was created, used to break ties.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static write_logged_features(config: feast.repo_config.RepoConfig, data: Union[pyarrow.lib.Table, pathlib.Path], source: feast.feature_logging.LoggingSource, logging_config: feast.feature_logging.LoggingConfig, registry: feast.infra.registry.base_registry.BaseRegistry)[source]

Writes logged features to a specified destination in the offline store.

If the specified destination exists, data will be appended; otherwise, the destination will be created and data will be added. Thus this function can be called repeatedly with the same destination to flush logs in chunks.

Parameters
  • config – The config for the current feature store.

  • data – An arrow table or a path to parquet directory that contains the logs to write.

  • source – The logging source that provides a schema and some additional metadata.

  • logging_config – A LoggingConfig object that determines where the logs will be written.

  • registry – The registry for the current feature store.

class feast.infra.offline_stores.offline_store.RetrievalJob[source]

Bases: abc.ABC

A RetrievalJob manages the execution of a query to retrieve data from the offline store.
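
A typical usage sketch, assuming job is a RetrievalJob returned by FeatureStore.get_historical_features; the query only executes when one of the materializing methods is called:

    # `job` is assumed to be a RetrievalJob returned by
    # FeatureStore.get_historical_features; the query runs only when one of the
    # materializing methods below is called.
    df = job.to_df()        # execute and return a pandas DataFrame
    table = job.to_arrow()  # execute and return a pyarrow.Table

    # Some implementations can also export results without pulling them locally.
    if job.supports_remote_storage_export():
        parquet_paths = job.to_remote_storage()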

abstract property full_feature_names: bool

Returns True if full feature names should be applied to the results of the query.

abstract property metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata]

Returns metadata about the retrieval job.

abstract property on_demand_feature_views: List[feast.on_demand_feature_view.OnDemandFeatureView]

Returns a list containing all the on demand feature views to be handled.

abstract persist(storage: feast.saved_dataset.SavedDatasetStorage, allow_overwrite: bool = False)[source]

Synchronously executes the underlying query and persists the result in the same offline store at the specified destination.

Parameters
  • storage – The saved dataset storage object specifying where the result should be persisted.

  • allow_overwrite – If True, a pre-existing location (e.g. table or file) can be overwritten. Currently not all individual offline store implementations make use of this parameter.

supports_remote_storage_export() bool[source]

Returns True if the RetrievalJob supports to_remote_storage.

to_arrow(validation_reference: Optional[ValidationReference] = None) pyarrow.lib.Table[source]

Synchronously executes the underlying query and returns the result as an arrow table.

On demand transformations will be executed. If a validation reference is provided, the dataframe will be validated.

Parameters

validation_reference (optional) – The validation to apply against the retrieved dataframe.

to_df(validation_reference: Optional[ValidationReference] = None) pandas.core.frame.DataFrame[source]

Synchronously executes the underlying query and returns the result as a pandas dataframe.

On demand transformations will be executed. If a validation reference is provided, the dataframe will be validated.

Parameters

validation_reference (optional) – The validation to apply against the retrieved dataframe.

to_remote_storage() List[str][source]

Synchronously executes the underlying query and exports the results to remote storage (e.g. S3 or GCS).

Implementations of this method should export the results as multiple parquet files, each file sized appropriately depending on how much data is being returned by the retrieval job.

Returns

A list of parquet file paths in remote storage.

to_sql() str[source]

Return RetrievalJob generated SQL statement if applicable.

class feast.infra.offline_stores.offline_store.RetrievalMetadata(features: List[str], keys: List[str], min_event_timestamp: Optional[datetime.datetime] = None, max_event_timestamp: Optional[datetime.datetime] = None)[source]

Bases: object

features: List[str]
keys: List[str]
max_event_timestamp: Optional[datetime.datetime]
min_event_timestamp: Optional[datetime.datetime]

feast.infra.offline_stores.offline_utils module

class feast.infra.offline_stores.offline_utils.FeatureViewQueryContext(name: str, ttl: int, entities: List[str], features: List[str], field_mapping: Dict[str, str], timestamp_field: str, created_timestamp_column: Optional[str], table_subquery: str, entity_selections: List[str], min_event_timestamp: Optional[str], max_event_timestamp: str, date_partition_column: Optional[str])[source]

Bases: object

Context object used to template a BigQuery and Redshift point-in-time SQL query

created_timestamp_column: Optional[str]
date_partition_column: Optional[str]
entities: List[str]
entity_selections: List[str]
features: List[str]
field_mapping: Dict[str, str]
max_event_timestamp: str
min_event_timestamp: Optional[str]
name: str
table_subquery: str
timestamp_field: str
ttl: int
feast.infra.offline_stores.offline_utils.assert_expected_columns_in_entity_df(entity_schema: Dict[str, numpy.dtype], join_keys: Set[str], entity_df_event_timestamp_col: str)[source]
feast.infra.offline_stores.offline_utils.build_point_in_time_query(feature_view_query_contexts: List[feast.infra.offline_stores.offline_utils.FeatureViewQueryContext], left_table_query_string: str, entity_df_event_timestamp_col: str, entity_df_columns: KeysView[str], query_template: str, full_feature_names: bool = False) str[source]

Builds a point-in-time query between each feature view table and the entity dataframe, for BigQuery and Redshift

feast.infra.offline_stores.offline_utils.get_entity_df_timestamp_bounds(entity_df: pandas.core.frame.DataFrame, event_timestamp_col: str) Tuple[pandas._libs.tslibs.timestamps.Timestamp, pandas._libs.tslibs.timestamps.Timestamp][source]
feast.infra.offline_stores.offline_utils.get_expected_join_keys(project: str, feature_views: List[feast.feature_view.FeatureView], registry: feast.infra.registry.base_registry.BaseRegistry) Set[str][source]
feast.infra.offline_stores.offline_utils.get_feature_view_query_context(feature_refs: List[str], feature_views: List[feast.feature_view.FeatureView], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, entity_df_timestamp_range: Tuple[datetime.datetime, datetime.datetime]) List[feast.infra.offline_stores.offline_utils.FeatureViewQueryContext][source]

Builds a query context containing all the information required to template a BigQuery or Redshift point-in-time SQL query

feast.infra.offline_stores.offline_utils.get_offline_store_from_config(offline_store_config: Any) feast.infra.offline_stores.offline_store.OfflineStore[source]

Creates an offline store corresponding to the given offline store config.

feast.infra.offline_stores.offline_utils.get_pyarrow_schema_from_batch_source(config: feast.repo_config.RepoConfig, batch_source: feast.data_source.DataSource) Tuple[pyarrow.lib.Schema, List[str]][source]

Returns the pyarrow schema and column names for the given batch source.

feast.infra.offline_stores.offline_utils.get_temp_entity_table_name() str[source]

Returns a random table name for uploading the entity dataframe

feast.infra.offline_stores.offline_utils.infer_event_timestamp_from_entity_df(entity_schema: Dict[str, numpy.dtype]) str[source]

feast.infra.offline_stores.redshift module

class feast.infra.offline_stores.redshift.RedshiftOfflineStore[source]

Bases: feast.infra.offline_stores.offline_store.OfflineStore

static get_historical_features(config: feast.repo_config.RepoConfig, feature_views: List[feast.feature_view.FeatureView], feature_refs: List[str], entity_df: Union[pandas.core.frame.DataFrame, str], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, full_feature_names: bool = False) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Retrieves the point-in-time correct historical feature values for the specified entity rows.

Parameters
  • config – The config for the current feature store.

  • feature_views – A list containing all feature views that are referenced in the entity rows.

  • feature_refs – The features to be retrieved.

  • entity_df – A collection of rows containing all entity columns on which features need to be joined, as well as the timestamp column used for point-in-time joins. Either a pandas dataframe can be provided or a SQL query.

  • registry – The registry for the current feature store.

  • project – Feast project to which the feature views belong.

  • full_feature_names – If True, feature names will be prefixed with the corresponding feature view name, changing them from the format “feature” to “feature_view__feature” (e.g. “daily_transactions” changes to “customer_fv__daily_transactions”).

Returns

A RetrievalJob that can be executed to get the features.

static offline_write_batch(config: feast.repo_config.RepoConfig, feature_view: feast.feature_view.FeatureView, table: pyarrow.lib.Table, progress: Optional[Callable[[int], Any]])[source]

Writes the specified arrow table to the data source underlying the specified feature view.

Parameters
  • config – The config for the current feature store.

  • feature_view – The feature view whose batch source should be written.

  • table – The arrow table to write.

  • progress – Function to be called once a portion of the data has been written, used to show progress.

static pull_all_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts all the entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static pull_latest_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, created_timestamp_column: Optional[str], start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts the latest entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column, used to determine which rows are the most recent.

  • created_timestamp_column – The column indicating when the row was created, used to break ties.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static write_logged_features(config: feast.repo_config.RepoConfig, data: Union[pyarrow.lib.Table, pathlib.Path], source: feast.feature_logging.LoggingSource, logging_config: feast.feature_logging.LoggingConfig, registry: feast.infra.registry.base_registry.BaseRegistry)[source]

Writes logged features to a specified destination in the offline store.

If the specified destination exists, data will be appended; otherwise, the destination will be created and data will be added. Thus this function can be called repeatedly with the same destination to flush logs in chunks.

Parameters
  • config – The config for the current feature store.

  • data – An arrow table or a path to parquet directory that contains the logs to write.

  • source – The logging source that provides a schema and some additional metadata.

  • logging_config – A LoggingConfig object that determines where the logs will be written.

  • registry – The registry for the current feature store.

class feast.infra.offline_stores.redshift.RedshiftOfflineStoreConfig(*, type: Literal['redshift'] = 'redshift', cluster_id: pydantic.types.StrictStr, region: pydantic.types.StrictStr, user: pydantic.types.StrictStr, database: pydantic.types.StrictStr, s3_staging_location: pydantic.types.StrictStr, iam_role: pydantic.types.StrictStr)[source]

Bases: feast.repo_config.FeastConfigBaseModel

Offline store config for AWS Redshift

cluster_id: pydantic.types.StrictStr

Redshift cluster identifier

database: pydantic.types.StrictStr

Redshift database name

iam_role: pydantic.types.StrictStr

IAM Role for Redshift, granting it access to S3

region: pydantic.types.StrictStr

Redshift cluster’s AWS region

s3_staging_location: pydantic.types.StrictStr

S3 path for importing & exporting data to Redshift

type: Literal['redshift']

Offline store type selector

user: pydantic.types.StrictStr

Redshift user name
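
This configuration is usually supplied through the offline_store block of feature_store.yaml; an equivalent Python construction sketch with placeholder values:

    from feast.infra.offline_stores.redshift import RedshiftOfflineStoreConfig

    offline_config = RedshiftOfflineStoreConfig(
        type="redshift",
        cluster_id="my-redshift-cluster",                      # placeholder cluster
        region="us-west-2",
        user="feast_user",
        database="feast_db",
        s3_staging_location="s3://my-bucket/feast-staging",    # import/export staging
        iam_role="arn:aws:iam::123456789012:role/feast-role",  # grants Redshift S3 access
    )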

class feast.infra.offline_stores.redshift.RedshiftRetrievalJob(query: Union[str, Callable[[], AbstractContextManager[str]]], redshift_client, s3_resource, config: feast.repo_config.RepoConfig, full_feature_names: bool, on_demand_feature_views: Optional[List[feast.on_demand_feature_view.OnDemandFeatureView]] = None, metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata] = None)[source]

Bases: feast.infra.offline_stores.offline_store.RetrievalJob

property full_feature_names: bool

Returns True if full feature names should be applied to the results of the query.

property metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata]

Returns metadata about the retrieval job.

property on_demand_feature_views: List[feast.on_demand_feature_view.OnDemandFeatureView]

Returns a list containing all the on demand feature views to be handled.

persist(storage: feast.saved_dataset.SavedDatasetStorage, allow_overwrite: bool = False)[source]

Synchronously executes the underlying query and persists the result in the same offline store at the specified destination.

Parameters
  • storage – The saved dataset storage object specifying where the result should be persisted.

  • allow_overwrite – If True, a pre-existing location (e.g. table or file) can be overwritten. Currently not all individual offline store implementations make use of this parameter.

supports_remote_storage_export() bool[source]

Returns True if the RetrievalJob supports to_remote_storage.

to_redshift(table_name: str) None[source]

Save dataset as a new Redshift table

to_remote_storage() List[str][source]

Synchronously executes the underlying query and exports the results to remote storage (e.g. S3 or GCS).

Implementations of this method should export the results as multiple parquet files, each file sized appropriately depending on how much data is being returned by the retrieval job.

Returns

A list of parquet file paths in remote storage.

to_s3() str[source]

Export dataset to S3 in Parquet format and return path

feast.infra.offline_stores.redshift_source module

class feast.infra.offline_stores.redshift_source.RedshiftLoggingDestination(*, table_name: str)[source]

Bases: feast.feature_logging.LoggingDestination

classmethod from_proto(config_proto: feast.core.FeatureService_pb2.LoggingConfig) feast.feature_logging.LoggingDestination[source]
table_name: str
to_data_source() feast.data_source.DataSource[source]

Convert this object into a data source to read logs from an offline store.

to_proto() feast.core.FeatureService_pb2.LoggingConfig[source]
class feast.infra.offline_stores.redshift_source.RedshiftOptions(table: Optional[str], schema: Optional[str], query: Optional[str], database: Optional[str])[source]

Bases: object

Configuration options for a Redshift data source.

classmethod from_proto(redshift_options_proto: feast.core.DataSource_pb2.RedshiftOptions)[source]

Creates a RedshiftOptions from a protobuf representation of a Redshift option.

Parameters

redshift_options_proto – A protobuf representation of a DataSource

Returns

A RedshiftOptions object based on the redshift_options protobuf.

to_proto() feast.core.DataSource_pb2.RedshiftOptions[source]

Converts a RedshiftOptions object to its protobuf representation.

Returns

A RedshiftOptionsProto protobuf.

class feast.infra.offline_stores.redshift_source.RedshiftSource(*, name: Optional[str] = None, timestamp_field: Optional[str] = '', table: Optional[str] = None, schema: Optional[str] = None, created_timestamp_column: Optional[str] = '', field_mapping: Optional[Dict[str, str]] = None, query: Optional[str] = None, description: Optional[str] = '', tags: Optional[Dict[str, str]] = None, owner: Optional[str] = '', database: Optional[str] = '')[source]

Bases: feast.data_source.DataSource

created_timestamp_column: str
property database

Returns the Redshift database of this Redshift source.

date_partition_column: str
description: str
field_mapping: Dict[str, str]
static from_proto(data_source: feast.core.DataSource_pb2.DataSource)[source]

Creates a RedshiftSource from a protobuf representation of a RedshiftSource.

Parameters

data_source – A protobuf representation of a RedshiftSource

Returns

A RedshiftSource object based on the data_source protobuf.

get_table_column_names_and_types(config: feast.repo_config.RepoConfig) Iterable[Tuple[str, str]][source]

Returns a mapping of column names to types for this Redshift source.

Parameters

config – A RepoConfig describing the feature repo

get_table_query_string() str[source]

Returns a string that can directly be used to reference this table in SQL.

name: str
owner: str
property query

Returns the Redshift query of this Redshift source.

property schema

Returns the schema of this Redshift source.

static source_datatype_to_feast_value_type() Callable[[str], feast.value_type.ValueType][source]

Returns the callable method that returns Feast type given the raw column type.

property table

Returns the table of this Redshift source.

tags: Dict[str, str]
timestamp_field: str
to_proto() feast.core.DataSource_pb2.DataSource[source]

Converts a RedshiftSource object to its protobuf representation.

Returns

A DataSourceProto object.

validate(config: feast.repo_config.RepoConfig)[source]

Validates the underlying data source.

Parameters

config – Configuration object used to configure a feature store.
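
A construction sketch for RedshiftSource with placeholder table and schema names:

    from feast.infra.offline_stores.redshift_source import RedshiftSource

    driver_stats_source = RedshiftSource(
        name="driver_hourly_stats_source",
        table="driver_hourly_stats",   # placeholder table name
        schema="public",
        timestamp_field="event_timestamp",
        created_timestamp_column="created",
    )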

class feast.infra.offline_stores.redshift_source.SavedDatasetRedshiftStorage(table_ref: str)[source]

Bases: feast.saved_dataset.SavedDatasetStorage

static from_proto(storage_proto: feast.core.SavedDataset_pb2.SavedDatasetStorage) feast.saved_dataset.SavedDatasetStorage[source]
redshift_options: feast.infra.offline_stores.redshift_source.RedshiftOptions
to_data_source() feast.data_source.DataSource[source]
to_proto() feast.core.SavedDataset_pb2.SavedDatasetStorage[source]

feast.infra.offline_stores.snowflake module

class feast.infra.offline_stores.snowflake.SnowflakeOfflineStore[source]

Bases: feast.infra.offline_stores.offline_store.OfflineStore

static get_historical_features(config: feast.repo_config.RepoConfig, feature_views: List[feast.feature_view.FeatureView], feature_refs: List[str], entity_df: Union[pandas.core.frame.DataFrame, str], registry: feast.infra.registry.base_registry.BaseRegistry, project: str, full_feature_names: bool = False) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Retrieves the point-in-time correct historical feature values for the specified entity rows.

Parameters
  • config – The config for the current feature store.

  • feature_views – A list containing all feature views that are referenced in the entity rows.

  • feature_refs – The features to be retrieved.

  • entity_df – A collection of rows containing all entity columns on which features need to be joined, as well as the timestamp column used for point-in-time joins. Either a pandas dataframe can be provided or a SQL query.

  • registry – The registry for the current feature store.

  • project – Feast project to which the feature views belong.

  • full_feature_names – If True, feature names will be prefixed with the corresponding feature view name, changing them from the format “feature” to “feature_view__feature” (e.g. “daily_transactions” changes to “customer_fv__daily_transactions”).

Returns

A RetrievalJob that can be executed to get the features.

static offline_write_batch(config: feast.repo_config.RepoConfig, feature_view: feast.feature_view.FeatureView, table: pyarrow.lib.Table, progress: Optional[Callable[[int], Any]])[source]

Writes the specified arrow table to the data source underlying the specified feature view.

Parameters
  • config – The config for the current feature store.

  • feature_view – The feature view whose batch source should be written.

  • table – The arrow table to write.

  • progress – Function to be called once a portion of the data has been written, used to show progress.

static pull_all_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts all the entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static pull_latest_from_table_or_query(config: feast.repo_config.RepoConfig, data_source: feast.data_source.DataSource, join_key_columns: List[str], feature_name_columns: List[str], timestamp_field: str, created_timestamp_column: Optional[str], start_date: datetime.datetime, end_date: datetime.datetime) feast.infra.offline_stores.offline_store.RetrievalJob[source]

Extracts the latest entity rows (i.e. the combination of join key columns, feature columns, and timestamp columns) from the specified data source that lie within the specified time range.

All of the column names should refer to columns that exist in the data source. In particular, any mapping of column names must have already happened.

Parameters
  • config – The config for the current feature store.

  • data_source – The data source from which the entity rows will be extracted.

  • join_key_columns – The columns of the join keys.

  • feature_name_columns – The columns of the features.

  • timestamp_field – The timestamp column, used to determine which rows are the most recent.

  • created_timestamp_column – The column indicating when the row was created, used to break ties.

  • start_date – The start of the time range.

  • end_date – The end of the time range.

Returns

A RetrievalJob that can be executed to get the entity rows.

static write_logged_features(config: feast.repo_config.RepoConfig, data: Union[pyarrow.lib.Table, pathlib.Path], source: feast.feature_logging.LoggingSource, logging_config: feast.feature_logging.LoggingConfig, registry: feast.infra.registry.base_registry.BaseRegistry)[source]

Writes logged features to a specified destination in the offline store.

If the specified destination exists, data will be appended; otherwise, the destination will be created and data will be added. Thus this function can be called repeatedly with the same destination to flush logs in chunks.

Parameters
  • config – The config for the current feature store.

  • data – An arrow table or a path to parquet directory that contains the logs to write.

  • source – The logging source that provides a schema and some additional metadata.

  • logging_config – A LoggingConfig object that determines where the logs will be written.

  • registry – The registry for the current feature store.

class feast.infra.offline_stores.snowflake.SnowflakeOfflineStoreConfig(*, type: Literal['snowflake.offline'] = 'snowflake.offline', config_path: Optional[str] = '/home/docs/.snowsql/config', account: Optional[str] = None, user: Optional[str] = None, password: Optional[str] = None, role: Optional[str] = None, warehouse: Optional[str] = None, authenticator: Optional[str] = None, database: pydantic.types.StrictStr, schema: Optional[str] = 'PUBLIC', storage_integration_name: Optional[str] = None, blob_export_location: Optional[str] = None)[source]

Bases: feast.repo_config.FeastConfigBaseModel

Offline store config for Snowflake

class Config[source]

Bases: object

allow_population_by_field_name = True
account: Optional[str]

Snowflake deployment identifier – drop .snowflakecomputing.com

authenticator: Optional[str]

Snowflake authenticator name

blob_export_location: Optional[str]

Location (in S3, Google storage or Azure storage) where data is offloaded

config_path: Optional[str]

Snowflake config path – an absolute path is required (cannot use ~)

database: pydantic.types.StrictStr

Snowflake database name

password: Optional[str]

Snowflake password

role: Optional[str]

Snowflake role name

schema_: Optional[str]

Snowflake schema name

storage_integration_name: Optional[str]

Storage integration name in snowflake

type: Literal['snowflake.offline']

Offline store type selector

user: Optional[str]

Snowflake user name

warehouse: Optional[str]

Snowflake warehouse name
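
This configuration is usually supplied through the offline_store block of feature_store.yaml; an equivalent Python construction sketch with placeholder credentials (the schema keyword populates the schema_ field via its alias):

    from feast.infra.offline_stores.snowflake import SnowflakeOfflineStoreConfig

    offline_config = SnowflakeOfflineStoreConfig(
        type="snowflake.offline",
        account="my_account",    # deployment identifier, without .snowflakecomputing.com
        user="feast_user",
        password="********",     # placeholder; prefer pulling this from a secret store
        role="FEAST_ROLE",
        warehouse="FEAST_WH",
        database="FEAST_DB",
        schema="PUBLIC",         # populates the schema_ field via its alias
    )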

class feast.infra.offline_stores.snowflake.SnowflakeRetrievalJob(query: Union[str, Callable[[], AbstractContextManager[str]]], snowflake_conn: snowflake.connector.connection.SnowflakeConnection, config: feast.repo_config.RepoConfig, full_feature_names: bool, on_demand_feature_views: Optional[List[feast.on_demand_feature_view.OnDemandFeatureView]] = None, metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata] = None)[source]

Bases: feast.infra.offline_stores.offline_store.RetrievalJob

property full_feature_names: bool

Returns True if full feature names should be applied to the results of the query.

property metadata: Optional[feast.infra.offline_stores.offline_store.RetrievalMetadata]

Returns metadata about the retrieval job.

property on_demand_feature_views: List[feast.on_demand_feature_view.OnDemandFeatureView]

Returns a list containing all the on demand feature views to be handled.

persist(storage: feast.saved_dataset.SavedDatasetStorage, allow_overwrite: bool = False)[source]

Synchronously executes the underlying query and persists the result in the same offline store at the specified destination.

Parameters
  • storage – The saved dataset storage object specifying where the result should be persisted.

  • allow_overwrite – If True, a pre-existing location (e.g. table or file) can be overwritten. Currently not all individual offline store implementations make use of this parameter.

supports_remote_storage_export() bool[source]

Returns True if the RetrievalJob supports to_remote_storage.

to_remote_storage() List[str][source]

Synchronously executes the underlying query and exports the results to remote storage (e.g. S3 or GCS).

Implementations of this method should export the results as multiple parquet files, each file sized appropriately depending on how much data is being returned by the retrieval job.

Returns

A list of parquet file paths in remote storage.

to_snowflake(table_name: str, temporary=False) None[source]

Save dataset as a new Snowflake table

to_spark_df(spark_session: SparkSession) DataFrame[source]

Converts the Snowflake query results to a PySpark DataFrame.

Parameters

spark_session – The SparkSession of the current environment.

Returns

A pyspark dataframe.

Return type

spark_df
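
A usage sketch, assuming job is a SnowflakeRetrievalJob returned by get_historical_features and that a Snowflake-Spark connector is available to the session:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("feast-demo").getOrCreate()

    # `job` is assumed to be a SnowflakeRetrievalJob from get_historical_features.
    spark_df = job.to_spark_df(spark)
    spark_df.show(5)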

to_sql() str[source]

Returns the SQL query that will be executed in Snowflake to build the historical feature table.

feast.infra.offline_stores.snowflake_source module

class feast.infra.offline_stores.snowflake_source.SavedDatasetSnowflakeStorage(table_ref: str)[source]

Bases: feast.saved_dataset.SavedDatasetStorage

static from_proto(storage_proto: feast.core.SavedDataset_pb2.SavedDatasetStorage) feast.saved_dataset.SavedDatasetStorage[source]
snowflake_options: feast.infra.offline_stores.snowflake_source.SnowflakeOptions
to_data_source() feast.data_source.DataSource[source]
to_proto() feast.core.SavedDataset_pb2.SavedDatasetStorage[source]
class feast.infra.offline_stores.snowflake_source.SnowflakeLoggingDestination(*, table_name: str)[source]

Bases: feast.feature_logging.LoggingDestination

classmethod from_proto(config_proto: feast.core.FeatureService_pb2.LoggingConfig) feast.feature_logging.LoggingDestination[source]
table_name: str
to_data_source() feast.data_source.DataSource[source]

Convert this object into a data source to read logs from an offline store.

to_proto() feast.core.FeatureService_pb2.LoggingConfig[source]
class feast.infra.offline_stores.snowflake_source.SnowflakeOptions(database: Optional[str], schema: Optional[str], table: Optional[str], query: Optional[str], warehouse: Optional[str])[source]

Bases: object

Configuration options for a Snowflake data source.

classmethod from_proto(snowflake_options_proto: feast.core.DataSource_pb2.SnowflakeOptions)[source]

Creates a SnowflakeOptions from a protobuf representation of a snowflake option.

Parameters

snowflake_options_proto – A protobuf representation of a DataSource

Returns

A SnowflakeOptions object based on the snowflake_options protobuf.

to_proto() feast.core.DataSource_pb2.SnowflakeOptions[source]

Converts a SnowflakeOptions object to its protobuf representation.

Returns

A SnowflakeOptionsProto protobuf.

class feast.infra.offline_stores.snowflake_source.SnowflakeSource(*, name: Optional[str] = None, timestamp_field: Optional[str] = '', database: Optional[str] = None, warehouse: Optional[str] = None, schema: Optional[str] = None, table: Optional[str] = None, query: Optional[str] = None, created_timestamp_column: Optional[str] = '', field_mapping: Optional[Dict[str, str]] = None, description: Optional[str] = '', tags: Optional[Dict[str, str]] = None, owner: Optional[str] = '')[source]

Bases: feast.data_source.DataSource

created_timestamp_column: str
property database

Returns the database of this snowflake source.

date_partition_column: str
description: str
field_mapping: Dict[str, str]
static from_proto(data_source: feast.core.DataSource_pb2.DataSource)[source]

Creates a SnowflakeSource from a protobuf representation of a SnowflakeSource.

Parameters

data_source – A protobuf representation of a SnowflakeSource

Returns

A SnowflakeSource object based on the data_source protobuf.

get_table_column_names_and_types(config: feast.repo_config.RepoConfig) Iterable[Tuple[str, str]][source]

Returns a mapping of column names to types for this snowflake source.

Parameters

config – A RepoConfig describing the feature repo

get_table_query_string() str[source]

Returns a string that can directly be used to reference this table in SQL.

name: str
owner: str
property query

Returns the snowflake query of this snowflake source.

property schema

Returns the schema of this snowflake source.

static source_datatype_to_feast_value_type() Callable[[str], feast.value_type.ValueType][source]

Returns the callable method that returns Feast type given the raw column type.

property table

Returns the table of this snowflake source.

tags: Dict[str, str]
timestamp_field: str
to_proto() feast.core.DataSource_pb2.DataSource[source]

Converts a SnowflakeSource object to its protobuf representation.

Returns

A DataSourceProto object.

validate(config: feast.repo_config.RepoConfig)[source]

Validates the underlying data source.

Parameters

config – Configuration object used to configure a feature store.

property warehouse

Returns the warehouse of this snowflake source.
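
A construction sketch for SnowflakeSource with placeholder database, schema, and table names:

    from feast.infra.offline_stores.snowflake_source import SnowflakeSource

    driver_stats_source = SnowflakeSource(
        name="driver_hourly_stats_source",
        database="FEAST_DB",            # placeholder database
        schema="PUBLIC",
        table="DRIVER_HOURLY_STATS",
        timestamp_field="event_timestamp",
        created_timestamp_column="created",
    )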

Module contents