
OneSignal API


OneSignal CSV Export & View Notification API

Overview

The OneSignal reader in turbo_stream wraps OneSignal's server API and can be used to:

  • Run csv_export requests
  • Run view_notification requests

Configuration

| Param | Description | Example |
| --- | --- | --- |
| endpoint | View the details of multiple notifications, or generate a compressed export of all of your current user data. | "csv_export" or "view_notification" |
| credentials | The key-value pair of the app_id and api_key, in JSON or YAML format. | {"api_key": "xxxxx", "app_id": "xxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxx"} |
| credential_file_fmt | The serialization format of the credentials file. | "json", "yml" or "yaml" |
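
A credentials file such as onesignal_creds.yml (referenced in the pipeline below) would carry the same two keys in YAML form; a minimal sketch:

api_key: xxxxx
app_id: xxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxx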

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| id | String | Required: Notification ID |
| app_id | String | Required: App ID |

Example

An example of the configuration object could look like:

{
  "endpoint": "csv_export"
}
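
For the view_notification endpoint, the notification id from the path parameters above would presumably be supplied as well. Assuming it is passed alongside the endpoint in the same configuration object (a sketch only, with a placeholder id):

{
  "endpoint": "view_notification",
  "id": "xxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxx"
}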

Setting up a Python pipeline:

import json

from turbo_stream.onesignal.reader import OnesignalReader


def main():
    csv_export_reader = OnesignalReader(
        configuration={
            "endpoint": "csv_export"
        },
        credentials="onesignal_creds.yml",
        credential_file_fmt="yml"
    )

    csv_export_data = csv_export_reader.run_query()  # run the query configured above
    print(csv_export_data)  # the response object is returned as a flat json structure

    # data can be written to AWS S3; the file format is inferred from the key's extension
    csv_export_reader.write_data_to_s3(bucket="my-bucket", key="path/data.json")
    csv_export_reader.write_data_to_s3(bucket="my-bucket", key="path/data.csv")
    csv_export_reader.write_data_to_s3(bucket="my-bucket", key="path/data.parquet")

    # any other extension is written as a blob with that extension

    # data can also be partitioned by a given field before writing to s3,
    # grouping the file names in the bucket by that field (commonly a date field),
    # which avoids duplication when writing to s3 repeatedly
    csv_export_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="json")
    csv_export_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="csv")
    csv_export_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="parquet")

    view_notification_reader = OnesignalReader(
        configuration={
            "endpoint": "view_notification"
        },
        credentials="onesignal_creds.yml",
        credential_file_fmt="yml"
    )

    view_notification_data = view_notification_reader.run_query()  # run the query configured above
    print(view_notification_data)  # the response object is returned as a flat json structure

    # data can be written to AWS S3; the file format is inferred from the key's extension
    view_notification_reader.write_data_to_s3(bucket="my-bucket", key="path/data.json")
    view_notification_reader.write_data_to_s3(bucket="my-bucket", key="path/data.csv")
    view_notification_reader.write_data_to_s3(bucket="my-bucket", key="path/data.parquet")

    # any other extension is written as a blob with that extension

    # data can also be partitioned by a given field before writing to s3,
    # grouping the file names in the bucket by that field (commonly a date field),
    # which avoids duplication when writing to s3 repeatedly
    view_notification_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="json")
    view_notification_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="csv")
    view_notification_reader.write_partition_data_to_s3(bucket="my-bucket", path="my/path", partition="created_at", fmt="parquet")


if __name__ == '__main__':
    main()

Features

turbo-stream comes with detailed logging functionality so that all pipelines can be tracked via their logs. The request object can stagger requests to reduce rolling-quota issues, and retries common OneSignal errors, attempting the query up to 5 times before failing. This behaviour is common across all vendors.
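
The retry behaviour described above is roughly equivalent to the following sketch (an illustration only, not the library's actual implementation; the request_fn name and the delay are assumptions):

import logging
import time


def run_with_retries(request_fn, attempts=5, wait=5):
    """Illustrative retry loop: call request_fn, retrying on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return request_fn()
        except Exception as err:
            logging.warning("Attempt %s of %s failed: %s", attempt, attempts, err)
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(wait)  # stagger retries to ease rolling-quota pressure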

Comments

For CSV Export, the file may take several minutes to generate, depending on the number of users in your app.
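
If you call OneSignal's csv_export endpoint directly rather than through turbo_stream, the response contains a csv_file_url that only becomes downloadable once generation finishes. A minimal polling sketch (the 30-second interval is an assumption):

import time

import requests


def wait_for_export(csv_file_url, interval=30):
    """Poll the export URL until the generated file is available."""
    while True:
        response = requests.get(csv_file_url)
        if response.status_code == 200:
            return response.content  # gzipped CSV payload
        time.sleep(interval)  # file not ready yet; wait and try again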
