
Create an AWS Redshift connector

Last updated on July 10, 2024

Introduction

You can create and configure an AWS Redshift connector in the AccelByte Gaming Services (AGS) Admin Portal to ensure seamless integration between your data streaming workflows and Amazon Web Services (AWS) Redshift.

Prerequisites

  • Access to the AGS Admin Portal.
  • You have created roles, users with necessary privileges, and databases in AWS Redshift. See Configure Redshift permission setup to learn how. This ensures that the AWS Redshift connector has the required permissions to access your Redshift data.
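
Before creating the connector, you may want to confirm that the database user can reach the cluster and holds the privileges the connector needs. Below is a minimal sketch using the psycopg2 driver (Redshift speaks a PostgreSQL-compatible protocol); the endpoint, database, user, and schema names are placeholders for illustration, not values from this guide.

  import psycopg2

  # All connection details below are hypothetical placeholders.
  conn = psycopg2.connect(
      host="my-cluster.abc123xyz.us-west-2.redshift.amazonaws.com",
      port=5439,                # Redshift's default port
      dbname="game_analytics",
      user="connector_user",    # the user created for the connector
      password="...",           # supply via a secrets manager in practice
  )
  with conn.cursor() as cur:
      # Confirm the user can create tables in the target schema.
      cur.execute("SELECT has_schema_privilege(current_user, 'public', 'create')")
      print("CREATE on schema public:", cur.fetchone()[0])
  conn.close()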

Create an AWS Redshift connector

  1. On the Admin Portal sidebar, go to ADD-ONS > AIS > Data Connector.

    Image shows navigating to the Analytics Connector page

  2. On the Data Connector page, click the Create Connector button. The Create Connector page appears.

  3. In the General Configuration section, fill in the required information:

    • Platform: select Redshift from the dropdown.

    • Connector Name: type in a name for the connector. Note that after the connector is created, a randomized set of numbers, preceded by a dash, is appended to the name you provided. For example, "AAAConnect" becomes "AAAConnect-123456" after the connector is created.

    • Redshift URL: type in the host name of the Redshift cluster.

    • Port: specify the port number of the Redshift cluster.

    • DB Username: type in the username that will be used to authenticate with the database.

    • Database Name: type in the name of the Redshift database.

    • Table Model: choose how tables will be created (illustrated in the sketch at the end of this step). Choose between:

      • Single: all events will be inserted into one table based on the event type.
        • Example topics:
          • analytics_game_telemetry.dev.lightfantastic.gameStarted
          • analytics_game_telemetry.dev.lightfantastic.gameEnded
        • Expected table (a single table, in the format schema.table_name):
          • public.game_telemetry_dev
      • Mapping: events will be inserted into multiple tables based on the topics.
        • Example topics:
          • analytics_game_telemetry.dev.lightfantastic.gameStarted
          • analytics_game_telemetry.dev.lightfantastic.gameEnded
        • Expected tables (multiple tables based on topics, in the format schema.table_name):
          • lightfantastic.gameStarted
          • lightfantastic.gameEnded
    • Flatten Column: choose how columns are created (also illustrated in the sketch at the end of this step). Choose between:

      • False (recommended for better performance): all events will be inserted into one column.

        • Example event:

          {
            "EventNamespace": "lightfantastic",
            "EventTimestamp": "2023-07-20T03:30:00.036483Z",
            "EventId": "d110582c54804a29ab1d95650ca4c644",
            "Payload": {
              "winning": true,
              "hero": "Captain America",
              "kill": 9,
              "network": 912.27,
              "item": [
                {
                  "name": "vibranium shield",
                  "defense": 10,
                  "attack": 1
                },
                {
                  "name": "mjolnir hammer",
                  "defense": 1,
                  "attack": 9
                }
              ]
            },
            "EventName": "gameEnded"
          }
        • Expected column:

          events
          {"EventNamespace":"lightfantastic","EventTimestamp":"2023-07-20T03:30:00.036483Z","EventId":"d110582c54804a29ab1d95650ca4c644","Payload":{"winning":true,"hero":"Captain America","kill":9,"network":912.27,"item":[{"name":"vibranium shield","defense":10,"attack":1},{"name":"mjolnir hammer","defense":1,"attack":9}]},"EventName":"gameEnded"}
      • True: all events will be inserted into multiple columns, one per event property.

        • Example event:

          {
            "EventNamespace": "lightfantastic",
            "EventTimestamp": "2023-07-20T03:30:00.036483Z",
            "EventId": "d110582c54804a29ab1d95650ca4c644",
            "Payload": {
              "winning": true,
              "hero": "Captain America",
              "kill": 9,
              "network": 912.27,
              "item": [
                {
                  "name": "vibranium shield",
                  "defense": 10,
                  "attack": 1
                },
                {
                  "name": "mjolnir hammer",
                  "defense": 1,
                  "attack": 9
                }
              ]
            },
            "EventName": "gameEnded"
          }
        • Expected columns (one per event property; each value shown next to its column name):

          eventid: "d110582c54804a29ab1d95650ca4c644"
          eventnamespace: "lightfantastic"
          eventtimestamp: 2023-07-20T03:30:00.036483Z
          eventname: "gameEnded"
          payload_item: [{"defense":10,"attack":1,"name":"vibranium shield"},{"defense":1,"attack":9,"name":"mjolnir hammer"}]
          payload_kill: 9
          payload_winning: true
          payload_network: 912.27
          payload_hero: "Captain America"
    • Flush Interval: set the maximum time interval, in milliseconds, after which buffered data is written into Redshift. The allowed range is 1 to 15 minutes (60,000 to 900,000 milliseconds).

    • Flush Size: set the maximum number of events that are buffered before being written into Redshift. The allowed range is 100 to 1,000 events.

    note

    Data is sent when either the flush interval or the flush size threshold is reached, whichever comes first.
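
    To make the Table Model and Flatten Column choices concrete, here is a minimal, hypothetical Python sketch of how a topic and event could map to a destination table and row under each setting. The helper names and topic parsing are assumptions for illustration, reconstructed from the examples above; the actual mapping is performed by the connector itself.

      import json

      def table_name(topic: str, table_model: str) -> str:
          # Topic format from the examples above:
          #   analytics_game_telemetry.<env>.<namespace>.<eventName>
          prefix, env, namespace, event_name = topic.split(".")
          if table_model == "Single":
              # One table for all events, e.g. public.game_telemetry_dev
              return f"public.{prefix.removeprefix('analytics_')}_{env}"
          # "Mapping": one table per topic, e.g. lightfantastic.gameStarted
          return f"{namespace}.{event_name}"

      def row(event: dict, flatten: bool) -> dict:
          if not flatten:
              # Flatten Column = False: the whole event lands in one "events" column.
              return {"events": json.dumps(event)}
          # Flatten Column = True: one column per property; nested Payload
          # fields become payload_<field> columns.
          flat = {k.lower(): v for k, v in event.items() if k != "Payload"}
          flat.update({f"payload_{k}": v for k, v in event["Payload"].items()})
          return flat

      print(table_name("analytics_game_telemetry.dev.lightfantastic.gameEnded", "Mapping"))
      # -> lightfantastic.gameEnded

      event = {"EventNamespace": "lightfantastic", "EventName": "gameEnded",
               "Payload": {"kill": 9, "winning": True}}
      print(row(event, flatten=True))
      # -> {'eventnamespace': 'lightfantastic', 'eventname': 'gameEnded',
      #     'payload_kill': 9, 'payload_winning': True}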

  4. Click Next to go to the Data Filtering section.

  5. In the Data Filtering section, fill in the required information:

    • Event: choose which event type will be the source:
      • Game Telemetry: Custom telemetry events that are sent from game clients (Custom Telemetry).
      • AccelByte Event: System-generated events from AccelByte services (Service Telemetry).
    • Filter: click the Add Filter button. The Add Filter pop-up form appears. You can add the specific namespaces and Kafka topics that you want to stream from the source services, or select all namespaces and/or all topics. This ensures that only relevant data is transferred. Click Add to create and save the filter (a sketch of the matching behavior follows this step).
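
    The matching behavior sketched below is an assumption based on the form's options (specific selections versus "all"): an event passes the filter when both its namespace and its topic are selected.

      from typing import Optional

      def passes_filter(topic: str, namespaces: Optional[set], topics: Optional[set]) -> bool:
          # None stands for "all namespaces" / "all topics" selected in the form.
          _, _, namespace, _ = topic.split(".")
          namespace_ok = namespaces is None or namespace in namespaces
          topic_ok = topics is None or topic in topics
          return namespace_ok and topic_ok

      print(passes_filter(
          "analytics_game_telemetry.dev.lightfantastic.gameStarted",
          namespaces={"lightfantastic"},
          topics=None,  # all topics selected
      ))  # -> True
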
  6. Click Next to go to the Role, ARN, & Policy Configuration section.

  7. Follow the steps that you see on the Role, ARN, & Policy Configuration section. You must implement these steps on the AWS Redshift end in order to activate the connector.

    important
    • Ensure that you have completed all the steps in this section before clicking Save. Otherwise, the connector will fail to activate.
    • If you don't want to activate the connector yet, you can save it as a draft: click Save without completing the steps in this section. An error message appears, stating that the connection failed. Click Do It Later to save the connector as a draft; you can activate it at a later time.
  8. After completing the required steps in AWS Redshift, go back to the AGS Admin Portal and click Activate. The Admin Portal will establish the connection between the connector and Redshift. The details page of the connector will then appear, with the connector status set to "ACTIVE."
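
Once the connector reports ACTIVE and the first flush has completed, you can verify that events are arriving by querying the destination table directly. This is a sketch assuming the Single table model with Flatten Column set to False, reusing the hypothetical placeholder connection details from the earlier example.

  import psycopg2

  # Connection details are the same hypothetical placeholders as above.
  conn = psycopg2.connect(
      host="my-cluster.abc123xyz.us-west-2.redshift.amazonaws.com",
      port=5439, dbname="game_analytics", user="connector_user", password="...",
  )
  with conn.cursor() as cur:
      # With Flatten Column = False, each event is one JSON string in the
      # "events" column of the single table, e.g. public.game_telemetry_dev.
      cur.execute("SELECT events FROM public.game_telemetry_dev LIMIT 5")
      for (event_json,) in cur.fetchall():
          print(event_json)
  conn.close()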