
Coins Analytics Guide

Written by Jen Johnson
Updated over 2 weeks ago

Data Replication Overview

An overview of the components and the architecture of Coins Analytics on Azure is shown below:

  • Coins runs on UNIX (on premise or in cloud)

  • Replication agents run periodically, extract datasets from Coins, and push them to the SQL database through a data gateway

  • Datasets can be defined in Coins (similar to the datasets as used for reports, charts, snapshots, pivots, etc.)


Data Gateway

The Data Gateway is normally installed and deployed by the technical team at Access Coins.


System Setup

Once the Data Gateway is deployed, the system parameters that control the data extract process must be set. Many of these parameters are determined by the Technical Resource when the gateway is configured for the first time. However, a few parameters need to be set based on individual business requirements. These parameters are:

  • EXCALSDATE – The calendar start date. A set of records starting on this date will be created in an Entity called SYCalendar, which holds dates based on the start date provided in this parameter.

  • EXCALWKDAY – The number of the day of the week considered to be the start of the week (1 = Sunday through 7 = Saturday). The default is Sunday.

  • EXCALYEARS – How many future years of the calendar should be generated in SYCalendar.

  • EXHIST – The number of days of Change Data Capture history to keep, allowing for disaster recovery. The default of 2 days should be sufficient.

  • EXKCOS – A comma-separated list of companies to extract data from.

  • EXPREFIX – A prefix added to the start of the Entity name to create the extract and dataset records. This ensures that datasets created for replication do not conflict with or overwrite any other datasets you may have, and keeps all the extract datasets together.

  • EXMON – The email address to notify in the event of the replication process failing.
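As a rough illustration of how the three calendar parameters interact, the sketch below generates one row per day in the style of SYCalendar. The function name, field names, and the weekday conversion are assumptions for illustration, not the actual Coins implementation.

```python
from datetime import date, timedelta

def build_sy_calendar(start: date, week_start: int, future_years: int):
    """Yield one record per day from `start` (EXCALSDATE) to the end of the
    horizon implied by EXCALYEARS. `week_start` follows the EXCALWKDAY
    convention: 1 = Sunday ... 7 = Saturday."""
    end = date(date.today().year + future_years, 12, 31)
    # Python's date.weekday() gives Monday=0..Sunday=6; convert to 1=Sunday..7=Saturday.
    to_param = lambda d: (d.weekday() + 1) % 7 + 1
    day = start
    while day <= end:
        yield {"cal_date": day,
               "day_of_week": to_param(day),
               "week_starts_today": to_param(day) == week_start}
        day += timedelta(days=1)

# e.g. EXCALSDATE = 2020-01-01, EXCALWKDAY = 1 (Sunday), EXCALYEARS = 1
rows = list(build_sy_calendar(date(2020, 1, 1), 1, 1))
```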


Extracts

Once the required system parameters are set up for Coins Analytics on Azure (SQL Replication), "Extracts" becomes available on the menu.

Entity Maintenance

All Entities (i.e., Tables) that are available to be extracted and loaded to SQL Database exist in this area. Entities are prefixed with a module code e.g., APVendor, JCJob.

It is possible to generate extracts, build SQL tables, and extract data from the Entity Maintenance screen. Extracts can also be CDC enabled and/or disabled on this page by running one of the functions available under "Generate Extract", pictured at the bottom right-hand corner of the screen. Once an entity is generated, its corresponding SQL table built, and/or CDC enabled, a tick appears in the relevant column.

NOTE:

  1. Be aware of any Entities that contain an * in front of the Label. These Entities may not be complete. The * is triggered by either not having a diagram created or a blank description in the Entity (these Entities were automatically generated from the previous ETL solution and have not been reviewed).

  2. There are two entities, "PTPeople" and "PTPeople-C", that must both be generated since they are UNIONed and the SQL table "PTPeople" is generated from the result of this UNION. If you do not generate both entities, the resulting table in SQL will not be correct and the Reconciliation report will not provide the correct count results.

Columns

For each Entity, columns are automatically added for "last extract", "last update" (in ERP+) and "when created" (in ERP+). Columns are also automatically created for "join ID fields". Column names generally follow Coins column names and are unique unless they are the same across multiple tables. Fields prefixed with x are considered custom; all others are standard. It is possible to suppress creation of a field or fields in the SQL table by ticking the Suppress column.

Note: Any change to an entity's setup will not be reflected in its SQL table until the entity is re-generated, its SQL table is re-built, and the data is re-extracted.

Relationships

Relationships are created automatically using the Primary and Foreign keys in each Entity. Custom relationships can be created to join tables to other tables, based on suggested relationships. To create a custom relationship, the relationship ID must begin with an X.

Diagrams

The Diagram tab lists all the relationship diagrams for a given Entity.

Triggers

Triggers for each Entity are defined to link the CDC trigger (Change Data Capture) with the Entity. The source table is essential for this link, which is made automatically on creation.

Other Triggers should be defined and linked where other tables/records will affect the field value on this Entity. The Triggers should then be linked to the fields that they affect (this may be required in the future to provide field-level data refresh).

The fields required from CDC trigger should match the key field selection criteria for the Entity. Typically, this will be the primary key fields.

Note: The following steps must be taken if a new trigger should be added to the existing triggers of a custom entity:

  1. Add new trigger / triggers to the "Triggers" tab of the entity.

  2. In "Detail" tab select the new trigger / triggers by ticking the box and then saving the entity.

  3. Re-generate the entity to link its corresponding extract to all triggers defined for this entity.

Views

Specific views for an Entity can be defined and created. Views can be a subset of rows and/or a subset of columns, with a Condition that is a partial SQL query. Column labels can be overridden by specifying values in Fields Label Overrides.

When creating your own bespoke entities, do not duplicate Label names. Label duplication will cause an error when generating the Extract. All Labels within an Entity must be unique.

Indexes

  • Indexes should be defined to support views or other desirable access to records in the table.

  • Indexes are automatically created to support the joins defined in relationships.

  • When creating an index for the Entity the format required will be:

    • Name – A name of the Index e.g.: Index1

    • Unique – is the Index unique? If yes then tick the box for "Unique", otherwise leave blank

    • Columns – enter the columns (fields) that are to make up the Index, each followed by 0 (ascending) or 1 (descending), e.g.: kco,0,job_num,0,vwb_code,0
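For illustration, the Columns format above can be translated mechanically into a CREATE INDEX statement. The helper name and the exact SQL text emitted by the extract process are assumptions; only the column/0-or-1 pair layout comes from the format described above.

```python
def index_sql(table: str, name: str, spec: str, unique: bool = False) -> str:
    # spec is pairs of "column,order" where order is 0 (ascending) or 1 (descending)
    parts = spec.split(",")
    cols = [f"{col} {'ASC' if order == '0' else 'DESC'}"
            for col, order in zip(parts[::2], parts[1::2])]
    kind = "UNIQUE INDEX" if unique else "INDEX"
    return f"CREATE {kind} {name} ON {table} ({', '.join(cols)})"

stmt = index_sql("JCJob", "Index1", "kco,0,job_num,0,vwb_code,0")
```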

Export / Import Entities

Coins recommends that any development and testing of custom entities happens in a non-live analytics environment first; once fully tested, these entities should be deployed to the LIVE environment.

Once custom entities are ready to go live, they can be exported from an environment and then imported to the other environment. It is possible to select more than one entity, export them and then import them all to another environment.

  1. Select the Entity / Entities.

  2. Select Export Entities option then click the action arrow.

  3. Copy the entity data from the "Export Entities" page.

  4. Navigate to the "Entity Maintenance" in the other environment into which these entities should be imported.

  5. Click on the Options menu in the top right-hand corner of the "Entity Maintenance" screen.

  6. Select "Import Entities".

  7. Paste entity data copied from the source environment and save it.

  8. Entities will appear in the entity maintenance.


Column Update

This allows you to see columns across all Entities instead of going to each individual Entity.


Diagram Maintenance

The Coins Data model can be visualised via Diagrams. This allows data analysts to browse, navigate, and discover the Coins Data Dictionary and the metadata that describes the available Coins data.

Every Entity appears in at least one diagram, and all diagrams can be maintained here. The description of each diagram serves as a guide to what it contains.


Extract Parameters

Defining extract parameters enables you to customise the Data Model where it cannot be derived from existing parameters. This is particularly useful when the existing parameters are company specific and the extracts need to be cross-company.

  • %CSCRTITEMID - Certificate Item IDs

  • %FAFAAANAL – A Pipe separated list of Fixed Asset Analysis codes in use.

  • %GLANAL - A pipe separated list of GL Account Analysis codes in use. Each entry consists of an analysis label. If blank, then the analysis field will not be built.

  • %GLEL - A pipe separated list of GL elements in use. Each entry consists of an element label. If blank, then the element field will not be built.

  • %GLEL1ANAL - A pipe separated list of GL Element 1 analysis set descriptions to be used (up to eight). If the entry is blank (or not entered) then the analysis field will be omitted from the extract.

  • %GLEL2ANAL - A pipe separated list of GL Element 2 Analysis set descriptions to be used (up to eight). If the entry is blank (or not entered) then the analysis field will be omitted from the extract.

  • %GLEL3ANAL - A pipe separated list of GL Element 3 Analysis set descriptions to be used (up to eight). If the entry is blank (or not entered) then the analysis field will be omitted from the extract.

  • %GLEL4ANAL - A pipe separated list of GL Element 4 Analysis set descriptions to be used (up to eight). If the entry is blank (or not entered) then the analysis field will be omitted from the extract.

  • %GLTOCUTOFF! – A cut-off date in ISO date format after which GL transactions will be extracted

  • %HBVUDANAL - A pipe separated list of house type Analysis set labels. Note if an entry is blank then that analysis set field will not be available on the house type record.

  • %HBVWBANAL - A pipe separated list of plot Analysis set labels. Note if an entry is blank then that analysis set field will not be available on the plot record.

  • %INFSTSTAGE - The name to be used for the first forecast date field in the plot inspection forecast. Associated House Sales Parameter is HS/FSTSTAGE (Current Value: Pre-Forecast) The name to be used for the initial stage in the plot inspection forecast; for example, Round Robin.

  • %JCBUDGETS - JC budget types that should be extracted.

  • %JCCAT – A Pipe separated list of JC Category Labels. Each entry in the list is a comma separated list of Category Number and Category Description. Category Number and Category Description should match the JC Categories defined in /Setup/Configuration/JC Configuration.

  • %JCTOCUTOFF! - A cut-off date in ISO date format after which JC transactions will be extracted

  • %PLAVMANAL - A pipe separated list of Supplier analysis set labels. Note if an entry is blank then that analysis set field will not be available on the Supplier record.

  • %SLRCMANAL - A pipe separated list of Customer analysis set labels. Note if an entry is blank then that analysis set field will not be available on the Customer record.

  • %SYBULKLIM - The maximum size of the BULK INSERT batch. This should be set depending on the performance of the target SQL database. It should be low enough so that the BULK INSERT command does not time out.

  • %SYDBUSER - The DB User ID of the process that is writing records to SQL. This user will be mapped to the SYSAdmin user for the purposes of security to give them access to all records.

  • %SYDELETELIM - Maximum size of the DELETE batch. This should be set based on the performance of the target SQL database and it should be low enough so that the DELETE command does not time out.

  • %SYINSERTLIM - The threshold of the number of records after which a BULK INSERT should be used. If there are fewer records than the threshold then multiple INSERT commands are used. This value can be tuned depending on the performance of the target SQL DB.

  • %SYCHILDDEL - A comma separated list of tables that require their child records to be deleted first due to large record sets.

  • %SYNORLS – Suppress row-level security in the SQL DB. It is now recommended to set this to Y to give the best flexibility for embedding analytics with row-level security.

  • %SYKCOFORMAT – The format of the "kco" field, which defaults to 999. Changing the format is not recommended.

  • %SYREPCUR – Sets the currency code for data extraction. The selected currency should be the base currency, or one of the alternate base currencies, for all the companies included in the data extracts.

  • %SYUSECUR – A pipe-separated list of currency number, label, and field extension. Up to six currencies can be defined, specifying which currency values are to be built in the extract. For example, setting this parameter to "1, Document,_Doc|2,Base,_Base" will create fields Balance_Doc and Balance_Base in the corresponding SQL table.

  • %SYBULKDIR - The directory into which the bulk insert data files are stored. If omitted, then the working directory of the report agent is used. Expected value is $BASE/var/home/XXX where XXX is either a username or a shared folder so that the files are visible in My Files.

  • %SYRLS – Default row level security is N, i.e., no security policy. This parameter replaces previous extract parameter, %SYNORLS. If security policies are required, this should be set to Y.
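To make the %SYUSECUR format concrete, the sketch below parses the example value from the list above and derives the field names it would produce. The helper functions are hypothetical; only the pipe/comma layout and the resulting Balance_Doc/Balance_Base names come from the parameter description.

```python
def parse_syusecur(value: str):
    """Parse entries of the form "number,label,field extension", pipe separated."""
    currencies = []
    for entry in value.split("|"):
        num, label, ext = [part.strip() for part in entry.split(",")]
        currencies.append({"number": int(num), "label": label, "ext": ext})
    return currencies

def currency_fields(base_field: str, currencies):
    # One SQL field per defined currency, e.g. Balance -> Balance_Doc, Balance_Base.
    return [base_field + c["ext"] for c in currencies]

currencies = parse_syusecur("1, Document,_Doc|2,Base,_Base")
fields = currency_fields("Balance", currencies)
```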


Generate Extracts

Once Entities are tailored, the Data model must be generated/compiled by running "Generate Extracts". This in turn creates customised datasets and customised extract definitions.


Compile Datasets

Sometimes it is necessary to compile datasets after installing Analytics fixes, in order to bring the new code into effect.

Running "Compile datasets" can be done from either:

  • System >> Extracts >> "Compile Datasets" → list the extracts whose datasets must be compiled, or leave the default "*" to compile all datasets

or

  • System >> Extracts >> "Extract Maintenance" >> select the extracts and run the "Compile Datasets" function from the drop-down menu


Extracts Maintenance

When Entities are generated, corresponding Datasets and Extracts are created. The extracts and datasets are named after the Entity, with a prefix based on the value of the system parameter "EXPREFIX".

Various actions are available to build/drop SQL tables, extract data to SQL, and recreate views.

"Build SQL" action allows you to Create Tables, Drop Tables, Drop and Create Tables, and Rebuild Views.

The corresponding SQL tables will be created by running "Create tables", these will initially be blank tables. The initial load, i.e., running "Extract Data", must be performed to transfer existing data across to SQL and populate the newly created tables.


Change Data Capture (CDC)

There is a trigger for each Entity that gets created automatically with a source table and its unique key fields. The Standard Triggers are all prefixed with %.

Real-time replication of data to SQL happens by enabling CDC for extracts. With CDC enabled, any change to a record in ERP+ fires its trigger and captures the changed record based on the fields defined in CDC. A MERGE SQL command is then sent via the extract agent/queue to update the record in the SQL table, allowing SQL to update existing records, insert new ones, and delete records as appropriate.
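The MERGE behaviour described above can be sketched with SQLite, using INSERT ... ON CONFLICT as a stand-in for MERGE (SQLite has no MERGE statement). The table and columns are invented for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE APVendor (kco INTEGER, vendor TEXT, name TEXT, "
           "PRIMARY KEY (kco, vendor))")

def replicate(kco: int, vendor: str, name: str) -> None:
    # Upsert: insert a new record, or update the existing one on key match,
    # mirroring the update-existing / insert-new behaviour of MERGE.
    db.execute("INSERT INTO APVendor (kco, vendor, name) VALUES (?, ?, ?) "
               "ON CONFLICT (kco, vendor) DO UPDATE SET name = excluded.name",
               (kco, vendor, name))

replicate(1, "V001", "Acme Ltd")
replicate(1, "V001", "Acme Limited")  # a second change to the same ERP+ record
rows = db.execute("SELECT name FROM APVendor").fetchall()
```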


Extract Data

This allows the user to extract data from the Coins database and load it into the associated SQL table.

The BULK INSERT method is used for extracting tables with high volumes of data from the Progress database to the SQL database on Azure. When BULK INSERT is used, various bulk files are created. By default these reside in the working directory of the report agent, which is often not accessible to the end user.

When a transfer to SQL fails due to data issues, it is often useful to inspect the data files. For end users to be able to inspect these files, the extract parameter "%SYBULKDIR" should be set.

It is important to consider the amount of disk space that will be required in this temporary directory. All the data for an extract will be dumped to text files and this may be many GB of data. If %SYBULKDIR is set, then this directory should have enough spare capacity to receive these files.

EXTRACT parameter %SYBULKDIR can be set to a specific directory where the bulk insert data files should be created. It is expected that this will be a user's home directory or another shared folder so that the files can be viewed and maintained in My Files function.

If a bulk insert works correctly then bulk files are created temporarily and are cleaned up on successful insert of the data to SQL. They are only left on UNIX if the process fails.
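How %SYINSERTLIM and %SYBULKLIM might combine when planning a load can be sketched as follows; the function and its return shape are invented, and only the threshold/batch-cap behaviour comes from the parameter descriptions above.

```python
def plan_load(record_count: int, insert_lim: int, bulk_lim: int) -> dict:
    if record_count < insert_lim:
        # Below the %SYINSERTLIM threshold: individual INSERT commands.
        return {"method": "INSERT", "batches": record_count}
    # Otherwise BULK INSERT, capped at %SYBULKLIM rows per batch
    # (ceiling division gives the number of batches).
    return {"method": "BULK INSERT", "batches": -(-record_count // bulk_lim)}

small = plan_load(500, 1000, 10000)
large = plan_load(25000, 1000, 10000)
```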


Financial Date Extract Data

This is a new convenience report that will run the Extract Data report multiple times, once for each period in a range of periods. This is particularly useful if you have a history type extract and wish to populate several periods of historical data.

The Financial Date Extract Data report simply runs the Extract Data report multiple times by generating suitable entries in the report queue.


View Diagrams

The Coins Data Model can be visualised here; no editing is allowed.


Workspaces

Creation and maintenance of workspaces, publishing Power BI files to workspaces, and viewing the list of published reports and datasets can now be done from within ERP+ as well as from the Power BI service.

Once a new workspace is created, Power BI files can be uploaded to it from the Upload tab within the workspace. You can also load and replace an existing Power BI file and/or remove published datasets and reports from a workspace.


Blob Files

The bulk data files created while running "Extract Data" are pushed to the BLOB storage container in Azure before being inserted into SQL. These files are temporary both on UNIX and in blob storage; if the insert to SQL is successful, the BLOB files are removed from ERP+ and blob storage at the end of the process.

If there is a failure, it may be necessary to check whether the files exist and/or inspect their content in BLOB storage. These files can be accessed via the Azure portal by a user with appropriate permissions.

This screen allows the files to be listed, viewed (downloaded), and deleted from the Blob container from within ERP+, without the need for Azure portal credentials.

This area should be inspected periodically to tidy up any files left in blob storage after a failed extract. This can also be done from the Azure portal by an administrator if required.


SQL Editor

Occasionally it is necessary to administer the analytics extract process by running a query on the SQL database to check that the data has been transferred. Querying the SQL database can be done from within ERP+ without needing to connect to SQL Server Management Studio. This is helpful if a user does not have access to SQL Server.

The query result can be exported to an Excel spreadsheet, should that be required.
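A minimal stand-in for this workflow, using SQLite and CSV purely for illustration; the real editor queries the Azure SQL database and exports to Excel.

```python
import csv
import io
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE JCJob (kco INTEGER, job_num TEXT)")
db.executemany("INSERT INTO JCJob VALUES (?, ?)", [(1, "J100"), (1, "J200")])

# Run an ad hoc query to check that data has been transferred...
cur = db.execute("SELECT kco, job_num FROM JCJob ORDER BY job_num")

# ...and export the result set with a header row.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([col[0] for col in cur.description])
writer.writerows(cur.fetchall())
exported = buf.getvalue().splitlines()
```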


Extract Monitor

Real-time replication of data to SQL is performed by an extract agent, which can be monitored in the "Extract Monitor" browser in ERP+. The extract agent/queue will be shown in one of the colours below:

  • Green – The agent is running

  • Red – The agent is not running

If the replication process fails, the agent will show in red, and a notification email will be sent to the email address set in SY/EXMON. In the event of a replication failure, all records related to that agent will queue up.

Extract Queue

When Coins Analytics is set up there will be one extract agent/queue used for real-time replication of data. It is possible to set up more than one queue and use it for replication of certain extracts within a module, but certain considerations apply.

  1. Create a new queue

    1. Navigate to System >> System setup >> Background process configuration / Extract queue tab, add a new queue.

      • Queue: Name of the queue

      • Description: Full name of the queue

      • Poll: Number of seconds after which new records will be captured by this queue

      • Auto: Whether to start automatically each time the system starts up

      • Status: Shows whether the agent is running

      • Start / Stop: These can be used to start and/or stop the agent manually

      • Log: All transactions that have been captured and replicated by this queue are written to this log, which can be used for investigation when required.

    2. Go to "Processes tab", click on "Open" button and then save it.

      (Note: This is a compulsory step which enables the agent to start.)

      This new queue will automatically be added to "Extract Monitor" browser.

  2. Required Extracts must be linked to the new queue

    1. Stop all agents

    2. For existing extracts whose data is already replicating, go to "Extract Maintenance", change the queue for the required extracts, and save them.

    3. For a new entity, change the queue to the new one before running "Generate Extract" if required, then generate.


Extract Reconciliation Reports

There are six standard reports that can be run at any time.

Extracts Test

Running this report at the time of initial setup will help identify which parameters have the wrong values and whether there are any problems in communication between ERP+ and the Data Gateway.

Extracts Reconciliation

This report highlights any discrepancies between record counts in the Coins database and the SQL database. It is recommended to schedule this report to run every night, outside working hours, to ensure the data in both databases reconciles.

Once Analytics is enabled on the LIVE environment and all the required entities are set up for replication, it's strongly recommended to schedule this report to run daily.

To avoid any delays or processing issues, it's best to schedule it outside of regular business hours, when the Extract queue is likely to be clear.

Running this report regularly helps with ongoing monitoring and makes it easier to keep data reconciliation accurate and up to date.

Note:

  • Running this report for All companies will also populate the SQL table "SYRecon", provided the table has been created; i.e., the "SYRecon" table must already exist if you require data to be pushed to it.

  • If the report is run for all tables, i.e., "Tables" is set to *, running the report will delete SYRecon and recreate it. If it is run for a subset of tables, e.g., AP* or HS*, only the values for those tables in the SYRecon table will be updated.

  • The Partial Reconciliation feature allows the reconciliation report to be run for a subset of records in both systems, enabling a quicker reconciliation. Currently, the GLBatch, GLTrans, GLTrans$, JCCosttran, and JCCosttran$ entities support Partial Reconciliation. The initial condition for reconciliation is a date greater than the start of the current GL financial year (the first of the two GL financial years).

    • Note: Re-generating the entity is required to add the partial reconciliation query to its corresponding extract.

  • To use it, run the Reconciliation report for one of the above entities and select the Partial Reconciliation check box. The report will return the record count from the start of the current GL year.

Extract Query Reconciliation

New ERP+ programs have been introduced to run a query and aggregate the data in ERP+ tables as well as in the SQL tables. The results can then be compared using the program "syrex02.addcheck". This enables users to check whether the total value of data in both databases reconciles.

These programs should be saved in a calculation program and then be used in "Extract Query Reconciliation" report.

Detailed Reconciliation

This report enables users to discover missing data and discrepancies between the two databases at any time, especially during an analytics implementation.

This report runs a query on a selected table, groups and counts the records by the provided columns. The selected columns for grouping the records MUST be database fields and cannot be calculations in ERP+. You can specify as many columns as needed to give you the count analysis required.

The "companies" selection and the date selections should be filled in exactly as they are on the Extract data page.

Note: This new count requires the extract datasets to be re-compiled before the report is first used. If the datasets are not re-compiled before running the report, it will fail.
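The group-and-count comparison this report performs can be sketched as below; both sides are in-memory SQLite databases here, and the table and grouping columns are only examples.

```python
import sqlite3

def group_counts(db, table, columns):
    # Count records grouped by the selected database columns.
    cols = ", ".join(columns)
    sql = f"SELECT {cols}, COUNT(*) FROM {table} GROUP BY {cols}"
    return {row[:-1]: row[-1] for row in db.execute(sql)}

erp = sqlite3.connect(":memory:")    # stands in for the Coins database
azure = sqlite3.connect(":memory:")  # stands in for the SQL database
for db in (erp, azure):
    db.execute("CREATE TABLE JCCosttran (kco INTEGER, job_num TEXT)")
erp.executemany("INSERT INTO JCCosttran VALUES (?, ?)",
                [(1, "J100"), (1, "J100"), (1, "J200")])
azure.executemany("INSERT INTO JCCosttran VALUES (?, ?)",
                  [(1, "J100"), (1, "J200")])  # one J100 record missing

a = group_counts(erp, "JCCosttran", ["kco", "job_num"])
b = group_counts(azure, "JCCosttran", ["kco", "job_num"])
mismatches = {key for key in a if a[key] != b.get(key, 0)}
```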

GL Account Balance Reconciliation

This report shows each GL account's movement and balance, both in ERP+ and in SQL, for the period selected at run time. If there are any differences between ERP+ and SQL, the account is highlighted with an asterisk.

This report can be run either for all companies using the current GL period in each company, or for a single company and a range of periods.
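A toy illustration of the asterisk convention; all account numbers and values below are made up.

```python
# Per-account balances for the selected period in each database.
erp_balances = {"1000": 500.00, "2000": -120.50, "3000": 75.25}
sql_balances = {"1000": 500.00, "2000": -120.50, "3000": 80.25}

def reconcile(erp: dict, sql: dict) -> list:
    lines = []
    for acct in sorted(erp):
        sql_val = sql.get(acct, 0.0)
        differs = abs(erp[acct] - sql_val) > 0.005  # tolerance for rounding
        flag = "*" if differs else " "
        lines.append(f"{acct} {erp[acct]:>10.2f} {sql_val:>10.2f} {flag}")
    return lines

report = reconcile(erp_balances, sql_balances)
```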

JC Contract Balance Reconciliation

This report calculates and compares the Costs and Revenues of each contract, both in ERP+ and in SQL, for the period selected at run time. It can also be set to run only for Active/Incomplete contracts to reduce the time taken to reconcile.

If there are any differences between ERP+ and SQL, the contract is highlighted with an asterisk.


Extracting Data to SQL

The following steps should be taken to extract data from Coins on UNIX and load it into the SQL database on Azure.

Generate Entities

All required Tables/Entities must be generated first. This creates the corresponding Datasets and Extracts, but at this stage no data is loaded into the SQL database yet. It is important to generate the System Entities first, before generating other Entities, in order to create the required security entities, "%SYSecurity" and "%SYSecurityList", used when generating the rest of the Entities.

These steps must be followed:

  1. Select all the Entities that start with SY.

  2. Run the Generate Extracts action.

  3. A report will be generated for this process. This report should be reviewed to ensure there are no errors.

    1. This process will create records in the Extracts Maintenance area where all extracts are prefixed with X_.

    2. Data sets are also automatically created in Data Set Definitions at this point.

  4. Generating Entities will create corresponding Datasets and Extracts.

Once the SY* tables have been generated, repeat steps 1 to 3 above to generate the rest of the required tables.

Extract Maintenance

Each generated Entity has a corresponding Extract, available in "Extract Maintenance". The next step in loading data to SQL is to create and build the tables in the SQL database. It is important to build and extract the system tables before the rest of the tables.

  1. Check all the extracts that start with X_SY (assuming EXPREFIX is set to "X_")

  2. Run the "Build SQL" action from the menu, this runs in the background and generates a report. It is recommended to produce this report when building multiple extracts at once.

    1. Select the "Create Tables" option. This will create the corresponding tables and views in SQL Database.

    2. When the build of tables and views is complete, each extract should have its Built box checked.

  3. Run the "Extract Data" action from the menu. This process runs the data sets, extracts the data from the Coins database, and then loads the results into the SQL database. The "In SQL" column will now have a check mark in it.

Once the System records are extracted, repeat steps 1 to 3 above for the rest of the tables required in SQL.

It is possible to drop SQL tables and views and rebuild them. To drop SQL tables, steps below should be followed:

  1. Check all the extracts that should be deleted.

  2. Run the "Drop SQL" action from the menu. This will drop SQL tables and views from SQL database. Running the action will remove the ticks from the extracts.

It is sometimes necessary to keep a history of data, e.g., a snapshot of JCJobVal per period. For this requirement, an extract can be defined with a "DayHistory / Week History / Period History" type before building the SQL table. Running "Build SQL" will then create two SQL tables, one of which has the suffix "Hist" in its name.

Example: Running Build SQL for "JCJobVal" with type = "Period History" creates two tables, one called "JCJobVal" and the other called "JCJobValHist".

  • Running Extract data at each period will append the data to the previous periods in "JCJobValHist" table.

  • Running Extract Data will delete the data from the "JCJobVal" table and replace it with the data for the period that was run.
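The two bullets above can be illustrated with a small SQLite sketch; the period column and the extract_period helper are assumptions, and only the replace-base/append-history behaviour comes from the text.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE JCJobVal (period TEXT, job_num TEXT, value REAL)")
db.execute("CREATE TABLE JCJobValHist (period TEXT, job_num TEXT, value REAL)")

def extract_period(period: str, rows) -> None:
    data = [(period, job, val) for job, val in rows]
    db.execute("DELETE FROM JCJobVal")  # base table keeps only the run period
    db.executemany("INSERT INTO JCJobVal VALUES (?, ?, ?)", data)
    db.executemany("INSERT INTO JCJobValHist VALUES (?, ?, ?)", data)  # append

extract_period("2024-01", [("J100", 1000.0)])
extract_period("2024-02", [("J100", 1100.0)])
base = db.execute("SELECT period FROM JCJobVal").fetchall()
hist_count = db.execute("SELECT COUNT(*) FROM JCJobValHist").fetchone()[0]
```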


Data Refresh

There are three distinct methods of refreshing tables in SQL after the initial data load: Change Data Capture (CDC), Daily Extract, and Scheduled Refresh.

  • Change Data Capture (CDC) – Real-time synchronisation of data happens for extracts with the "CDC Active" box ticked.

    • On "Extract Maintenance" select entity/entities that should be CDC enabled

    • Click on Multi Update button

    • Tick "CDC Active" box and save

  • Daily Extract - Extracts can be set to be refreshed daily, so that every day when the extract agent starts those tables will be re-extracted. Upon completion of extract of data, the "Last Extract" field will get updated with a date stamp.

    To set up a Daily Extract:

    • On "Extract Maintenance" select the entity

    • Tick the "daily extract" box for the entity and save

    After setup, when the agent runs for the first time, the Live box will be ticked automatically and the "Last Extract" field will receive a date stamp.

  • Scheduled Refresh - Based on dashboard/reporting requirements, extract of data can be scheduled to run periodically, e.g., per week, per month, etc.

    Note: Scheduling extract data can only be done from "Extract Data" menu which resides in "System >> Extracts >> Extract data".

    To run a Scheduled Refresh:

    • On "Extract Data" menu, list extracts that should be scheduled to refresh

    • On the "Output options" select "scheduled" for output type and click "Next"

    • The next screen is where you enter the details of the schedule for the data extracts to be run and save


Standard Power BI Dashboards

Views (SQL)

Coins provides out of the box standard SureStart Power BI dashboards, which can be used as they are or can be adjusted based on individual business requirements and then used.

The standard dashboards are built upon standard augmented views that reside under the "Views" menu. Required views for each dashboard reside under a schema with the same name as dashboard name. These views are referred to as schema specific views; for example, "AsAtOpenItem" dashboard works on top of the views that are built under the "AsAtOpenItem" schema. Users can display and browse all the views that are used in each dashboard by filtering based on schema name.

There are some views that reside under the "Generic" schema. These views have been created as generic since they are used in multiple standard Power BI dashboards.

Note: Required Generic views must be created first if a schema specific view has dependencies on these generic views. An error message will be generated if this has not been done in the right sequence, i.e., generic views first, and then schema specific views.

Note: Before creating views, all required entities must first be generated and their corresponding SQL tables created; otherwise an error will be issued listing all the tables missing for the creation of the view.

Users can browse the views' data in ERP+, by clicking on the "Editor" button that is available next to each view.

Reports

Standard Power BI dashboards that have been developed can be downloaded from Reports on the menu. These dashboards are built on standard views and can be used as they are. Alternatively, they can be used as a starting point and more measures can be added to them by a qualified Power BI developer.

Required dashboard specific views must have been created before the dashboard can be used.


Embedded Power BI

A "Dedicated Capacity" in "PowerBI.com" is required to embed a Power BI report or dashboard into Coins. A workspace shown with a diamond icon next to it has dedicated capacity. Once a Power BI dashboard is published to a dedicated workspace, it can be embedded into Coins.

Required System Parameters

System parameters that control embedding reports and dashboards into Coins must be set. These parameters are:

PBIAPPID = The App ID defined for embedding Power BI in Azure Active Directory. The format of this ID is xxxx-xxxx-xxxx-xxxxxxxxxxxx.

PBIAUTHEXP = This value is filled automatically when Power BI Embedded API authorizes with Azure Active Directory.

PBIAUTHHEAD = The HTTP header parameters sent to the authorisation endpoint. It has the fixed value "-F redirect_uri=urn:ietf:wg:oauth:2.0:oob -F scope=openid -F grant_type=password"

PBIAUTHPWD (not in use any longer) = The Azure Active Directory password for the API user for embedded Power BI dashboards and reports.

PBIAUTHRES = The Power BI resource URL. Should be set to "https://analysis.windows.net/powerbi/api"

PBIAUTHURL = The Power BI access token URL. Should be set to "https://login.windows.net/common/oauth2/token"

PBIAUTHUSER = Set to the user ID of the API user that will embed Power BI into Coins. This user must be set up with the correct permissions to access Power BI.

This user must have a Power BI Pro licence assigned to it and must be an owner or member of the workspaces in Power BI that you wish to embed.

PBIURL = The URL of the Power BI API. Should be set to https://api.powerbi.com/v1.0/myorg/
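To show how the PBIAUTH* parameters above fit together, here is a minimal sketch of the form payload an OAuth2 "password grant" token request to Azure Active Directory would carry. This is an illustration only, not the actual Coins implementation; the function name and payload assembly are assumptions, while the URLs and fixed fields come from the parameter values documented above.

```python
# Sketch only: how the PBI* system parameters described above would combine
# into an OAuth2 password-grant token request. This is NOT the Coins code;
# build_token_request is a hypothetical helper for illustration.

def build_token_request(app_id, user, password):
    """Assemble the token-request URL and form payload from the PBI* parameters."""
    url = "https://login.windows.net/common/oauth2/token"        # PBIAUTHURL
    payload = {
        "client_id": app_id,                                     # PBIAPPID
        "username": user,                                        # PBIAUTHUSER
        "password": password,                                    # PBIAUTHPWD (legacy)
        "resource": "https://analysis.windows.net/powerbi/api",  # PBIAUTHRES
        # Fixed fields, mirroring the documented PBIAUTHHEAD value:
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",
        "scope": "openid",
        "grant_type": "password",
    }
    return url, payload
```

PBIAUTHEXP would then be populated automatically from the token response when the Power BI Embedded API authorises against Azure Active Directory.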

System Configuration

After setting the required system parameters, a function that uses page section %WSYPBI should be created to embed content in ERP+. This enables display of the embedded dashboard in a:

  • Menu

  • Tile

  • Desktop

Function parameters must be set correctly:

  • tileheight=800 The height of the tile if launched as a desktop frame

  • stn_code= Name of the page for opening the embedded report

  • workspace= Name or ID of the dedicated workspace

  • report= Name or ID of the report / Dashboard embedded to Coins

  • User=Y This passes the credentials of the user logged in to Coins

Note: if the function parameter "User=Y" is excluded from the list of parameters, user security will not be applied to the dashboard.

A report embedded without user security is executed as the SY/PBIAUTHUSER rather than the user logged in to Coins, so users will see all the information available in the dashboard.

Using the workspace and report IDs, whilst less convenient, hides information about the workspaces and reports from inquisitive (and technically competent) users.

The stn_code referenced in the function above needs a header program of "sywpbi.p" and a header form defined. A standard form, %WSYPBI, is available to use.
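As an illustration of how the embed function parameters above fit together, the sketch below collects them into a single name=value string. The exact syntax Coins uses to store function parameters is not shown in this guide, so the comma-separated rendering and all example values here are assumptions.

```python
# Hypothetical illustration only: the real Coins function-parameter syntax
# may differ. Values below are example placeholders, not real names.

def render_function_params(params):
    """Render embed function parameters as comma-separated name=value pairs."""
    return ",".join(f"{name}={value}" for name, value in params.items())

embed_params = {
    "tileheight": "800",      # tile height if launched as a desktop frame
    "stn_code": "PBIPAGE",    # hypothetical page name for the embedded report
    "workspace": "Finance",   # workspace name or ID (IDs hide detail from users)
    "report": "AsAtOpenItem", # report or dashboard name or ID
    "User": "Y",              # pass the logged-in user's credentials
}
```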

Additional function parameters should be set to apply row level security to an embedded Power BI report.

Note: The Power BI report/dashboard must have a row level security role/filter to match the function parameters specified.

  • User: Should be set to Y to include the extra user identity parameters

  • Import: Should be set to Y if ALL data in the report or dashboard is imported into Power BI. Leave it blank for DirectQuery and composite data models. This setting determines whether the SQL access token for the API user is passed to the embedded content; it must only be passed if the content needs to access the data source. Passing it to content that does not require data-source access results in an error.

  • Roles: By default, a single role of "user" is passed to the embedded content.

    • If required, specify roles that should be passed to the embedded content.

    • Roles=user is equivalent to the default setting. This might be useful in situations where something other than user determines the data to be exposed.

  • Context: By default, the user identity passed to the embedded content is <USER>|<KCO>|<Context>. <USER> and <KCO> are determined by the logged-in user and company; Context can be set in the function. For example, a contract number could be passed with context={job_num}, where the {} is replaced with data in the context of the page. This would allow a contract-specific dashboard to be produced. More typically, user-specific and/or company-specific content would be created. The USERNAME() function must be used in the Power BI row-level security role/filter, and the user, kco and context must be split out from the passed string as required.
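The split described in the Context bullet above can be sketched as follows. In Power BI itself this split is done in the row-level security role/filter against USERNAME(); Python is used here only to illustrate the shape of the identity string, and the function name and example values are hypothetical.

```python
# Illustration of the identity string passed to the embedded content:
# "<USER>|<KCO>|<Context>". In Power BI the equivalent split is written
# in DAX against USERNAME(); this helper only demonstrates the format.

def split_identity(identity):
    """Split '<USER>|<KCO>|<Context>' into (user, kco, context).

    Context is optional; a two-part string yields an empty context.
    """
    parts = identity.split("|")
    user = parts[0]
    kco = parts[1] if len(parts) > 1 else ""
    context = parts[2] if len(parts) > 2 else ""
    return user, kco, context
```

For example, an identity string of "jsmith|01|C1234" (a hypothetical user, company, and contract number) splits into the user "jsmith", the company "01", and the context "C1234".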


Appendix A

Payroll Entities

The following Payroll (PR) entities are available and can be utilized based on the specific payroll setup—Weekly, Monthly, Bi-weekly (2-weekly), or Four-weekly cycles. These entities are designed to align with various payroll frequencies, allowing for more accurate and flexible payroll data processing.

It is recommended to use these new Payroll entities; to refresh them, clients should use the data extract process rather than enabling CDC.

Once the payroll process is complete and the batches are posted, the appropriate payroll entities should be extracted. A date range, depending on the type of entity (e.g. weekly, monthly), should be entered to extract data for these entities. Leaving the dates blank results in all data being extracted.

When a date range is entered at the time of running the extract process, the process first retrieves the data for the specified date range from the ERP+ database, then deletes any existing records for those dates from the SQL table, and finally inserts the newly extracted data.

This ensures that the payroll data remains accurate and up to date for the selected period.

The result is a user-controlled, date-based extract that can be run after each payroll process completes, or as and when it is needed, providing flexibility and control over the data extraction process.
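The delete-then-insert refresh described above can be sketched as a simple SQL pattern. The sketch below uses an in-memory SQLite database purely for illustration; the table and column names (pr_weekly, pay_date, and so on) are hypothetical and do not reflect the actual Coins SQL schema.

```python
import sqlite3

# Sketch of the date-ranged refresh described above: delete any existing
# rows for the requested dates, then insert the freshly extracted rows.
# Table and column names are hypothetical, not the real Coins schema.

def refresh_payroll(conn, rows, date_from, date_to):
    """Replace payroll rows in [date_from, date_to] with newly extracted data."""
    cur = conn.cursor()
    # Remove any existing records for the specified dates from the SQL table.
    cur.execute(
        "DELETE FROM pr_weekly WHERE pay_date BETWEEN ? AND ?",
        (date_from, date_to),
    )
    # Insert the newly extracted data for the same period.
    cur.executemany(
        "INSERT INTO pr_weekly (emp, pay_date, amount) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
```

Because the delete and insert cover the same date range, re-running the extract for a period simply replaces that period's rows, which is what keeps the payroll data accurate for the selected dates.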


Appendix B

Non-Live Environments Refresh Guidance

When planning a refresh of a non-live environment that has Analytics enabled, please consider the following key steps:

A. Ensure the param.ini is updated for all non-live environments.

B. If the non-live environment already has Analytics set up:

  1. Take a backup of the LIVE SQL database at the same time as backing up the ERP+ database. The SQL backup can be done while the ERP+ database is shut down and ready to be backed up. Once both backups are ready, restore the SQL backup to the non-live SQL database so that the ERP+ and SQL databases are in sync.

  2. The following system parameters should be changed to the correct values for the non-live environment: EXWHDEST, EXWHURL, EXWHKEY and EXWHSEC

  3. Compile all the datasets.

  4. Finally restart the Extract agents.

If a client does not take step B.1 above, the data in the SQL database will not reconcile with the data in ERP+, and all entities will need to be re-extracted to bring the two databases back into line.
