Many industries are regulated and have to adhere to a set of guidelines and industry standards. While we are a software company, we have been learning more about these standards and how our software is helping the financial industry adhere to the regulations.

Two of the standards we have been hearing about most often recently are:

      • System and Organization Controls (SOC) 2
      • Basel Committee on Banking Supervision’s standard number 239

System and Organization Controls (SOC) 2

SOC 2 is a comprehensive reporting framework set forth by the American Institute of Certified Public Accountants (AICPA). As part of this framework, organizations engage independent auditors to execute a series of assessments that test the controls related to the Trust Services Criteria (TSC): Security, Availability, Processing Integrity, Confidentiality, and Privacy.

From our perspective at Datagaps, Processing Integrity is where we can add the most value, so we explain our understanding of it below.

The Processing Integrity component has six criteria (a sketch of how the first criterion can be automated follows the list):

    • Procedures exist to prevent, or detect and correct, processing errors to meet the entity’s processing integrity commitments and system requirements.
    • System inputs are measured and recorded completely, accurately, and in a timely manner to meet the entity’s processing integrity commitments and system requirements.
    • Data is processed completely, accurately, and timely as authorized to meet the entity’s processing integrity commitments and system requirements.
    • Data is stored and maintained completely, accurately, and in a timely manner for its specified life span to meet the entity’s processing integrity commitments and system requirements.
    • System output is complete, accurate, and distributed to meet the entity’s processing integrity commitments and system requirements.
    • Modification of data, other than routine transaction processing, is authorized and processed to meet the entity’s processing integrity commitments and system requirements.
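
For illustration, the first criterion above (procedures to detect and correct processing errors) is often supported by an automated control as simple as comparing record counts and control totals between a source extract and the loaded target. The sketch below is a minimal example in Python; the file name, table name, connection string, and column name are hypothetical, not a prescription from the TSC.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source extract and target table, for illustration only.
SOURCE_FILE = "gl_extract_20030219.csv"
TARGET_TABLE = "gl_staging"
engine = create_engine("postgresql://user:password@localhost/finance")  # placeholder connection

source = pd.read_csv(SOURCE_FILE)
target = pd.read_sql_table(TARGET_TABLE, engine)

# Detect processing errors: row counts and control totals must match.
errors = []
if len(source) != len(target):
    errors.append(f"Row count mismatch: source={len(source)}, target={len(target)}")

source_total = source["transaction_amount"].sum()
target_total = target["transaction_amount"].sum()
if abs(source_total - target_total) > 0.01:
    errors.append(f"Control total mismatch: source={source_total:.2f}, target={target_total:.2f}")

# Surface errors so they can be corrected before the data is used downstream.
if errors:
    raise ValueError("Processing integrity check failed:\n" + "\n".join(errors))
print("Processing integrity check passed")
```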

After the Global Financial Crisis of 2007-2008, one of the key lessons learned was that the data management frameworks and IT architectures in place at many banks at that time were inadequate to support and report the risks associated with global banking. In response, the Basel Committee on Banking Supervision’s standard number 239 (BCBS 239) was issued as a set of principles for effective risk data aggregation and risk reporting.

The standard lays out 14 principles to strengthen banks’ risk data aggregation capabilities and internal risk reporting practices, and they fall under the following categories:

    • Overarching Governance and Infrastructure – Principles 1 and 2
    • Risk Data Aggregation Capabilities – Principles 3 through 6
    • Risk Reporting Practices – Principles 7 through 11
    • Supervisory Review, Tools and Cooperation – Principles 12 through 14

In this blog, we will focus on a few aspects of Principles 3 and 7, since these are the areas most relevant from our perspective at Datagaps. If you want a complete description of the principles, please refer to this link.

Accuracy and Integrity (Principle 3): A bank should be able to generate accurate and reliable risk data to meet normal and stress/crisis reporting accuracy requirements. Data should be aggregated on a largely automated basis so as to minimize the probability of errors.

  • Where a bank relies on manual processes and desktop applications (e.g., spreadsheets, databases) and has specific risk units that use these applications for software development, it should have effective mitigants in place (e.g., end-user computing policies and procedures) and other effective controls that are consistently applied across the bank’s processes.
  • Risk data should be reconciled with the bank’s sources, including accounting data where appropriate, to ensure that the risk data is accurate (a minimal reconciliation sketch follows this list).
  • There should be an appropriate balance between automated and manual systems. Where professional judgments are required, human intervention may be appropriate. A higher degree of automation is desirable to reduce the risk of errors.
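
The reconciliation of risk data against accounting or other source-system data, referenced in the list above, is exactly the kind of check that can be automated. Below is a minimal sketch in Python using pandas; the file names, column names, and tolerance are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical extracts; the column names are assumptions for illustration.
risk = pd.read_csv("risk_aggregation.csv")         # columns: business_unit, exposure
accounting = pd.read_csv("accounting_ledger.csv")  # columns: business_unit, balance

TOLERANCE = 1.00  # acceptable absolute difference per business unit

# Aggregate both sides to a common grain before comparing.
risk_totals = risk.groupby("business_unit")["exposure"].sum()
acct_totals = accounting.groupby("business_unit")["balance"].sum()

recon = pd.concat([risk_totals, acct_totals], axis=1, keys=["risk", "accounting"]).fillna(0.0)
recon["difference"] = (recon["risk"] - recon["accounting"]).abs()

# Any business unit whose difference exceeds the tolerance is a reconciliation break.
breaks = recon[recon["difference"] > TOLERANCE]
if breaks.empty:
    print("Risk data reconciles with accounting data within tolerance")
else:
    print("Reconciliation breaks found:")
    print(breaks)
```

Running such a script on every aggregation cycle, rather than quarterly or on request, is what makes the "largely automated basis" in the principle achievable.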

Accuracy (Principle 7): Risk management reports should accurately and precisely convey aggregated risk data and reflect risk in an exact manner. Reports should be reconciled and validated.

    1. Risk management reports should be accurate and precise to ensure a bank’s board and senior management can rely with confidence on the aggregated information to make critical decisions about risk.
    2. To ensure the accuracy of the reports, a bank should maintain, at a minimum, the following:
        • Defined requirements and processes to reconcile reports to risk data;
        • Automated and manual edit and reasonableness checks, including an inventory of the validation rules that are applied to quantitative information. The inventory should include explanations of the conventions used to describe any mathematical or logical relationships that should be verified through these validations or checks;
        • Integrated procedures for identifying, reporting, and explaining data errors or weaknesses in data integrity via exception reports (a sketch of such a rule inventory and exceptions report follows this list).
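
The inventory of validation rules and the exceptions report described above can themselves be kept as data and executed automatically against every report extract. The following is a minimal sketch in Python; the rule names, predicates, column names, and output file are hypothetical.

```python
import pandas as pd

# Hypothetical report extract; the column names are assumptions for illustration.
report = pd.read_csv("risk_report.csv")  # columns: exposure, limit, pd

# Inventory of validation rules: each entry documents the relationship being verified.
VALIDATION_RULES = [
    ("EXPOSURE_NON_NEGATIVE", "Exposure must be greater than or equal to 0", lambda df: df["exposure"] >= 0),
    ("EXPOSURE_WITHIN_LIMIT", "Exposure must not exceed the approved limit", lambda df: df["exposure"] <= df["limit"]),
    ("PD_IS_PROBABILITY", "Probability of default must lie between 0 and 1", lambda df: df["pd"].between(0, 1)),
]

# Apply every rule and collect the failing rows into an exceptions report.
exceptions = []
for rule_id, description, predicate in VALIDATION_RULES:
    failed = report[~predicate(report)]
    for row_index in failed.index:
        exceptions.append({"rule": rule_id, "description": description, "row": row_index})

exceptions_report = pd.DataFrame(exceptions, columns=["rule", "description", "row"])
exceptions_report.to_csv("exceptions_report.csv", index=False)
print(f"{len(exceptions_report)} exception(s) written to exceptions_report.csv")
```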

As we collaborate more with the financial industry, we are learning that while the guidelines are fairly clear, the actual implementation is complex, and there are no obvious tools or implementation strategies to help banks achieve these principles. In this blog, we want to delve a bit deeper with a concrete example of how our software can help achieve the Processing Integrity required for the audits and enable trust in enterprise data and systems.

Consider the General Ledger (GL), one of the most important elements in the accounting world. Data is entered into the GL both manually and automatically, and data is often extracted from the GL into enterprise reporting systems and other downstream systems, because the GL is a treasure trove of valuable information from Accounts Receivable, Accounts Payable, Purchasing, Payroll, and the other sub-ledgers within the enterprise, as shown in the image.

If anything goes wrong with the GL, it can have huge implications within the enterprise: if the GL is not trustworthy, then the reporting built on it is not trustworthy either, and wrong numbers might accidentally be reported to Wall Street, for which organizations can end up paying huge fines to governing entities. A basic balance check, sketched below, is a first line of defense.
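
One basic guard before GL data flows downstream is to verify that the ledger itself balances, that is, that debits equal credits within every journal. A minimal sketch in Python follows; the export file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical GL export; the column names are assumptions for illustration.
gl = pd.read_csv("general_ledger_export.csv")  # columns: journal_id, debit, credit

# A balanced ledger requires total debits to equal total credits within each journal.
by_journal = gl.groupby("journal_id")[["debit", "credit"]].sum()
by_journal["out_of_balance"] = (by_journal["debit"] - by_journal["credit"]).abs()

unbalanced = by_journal[by_journal["out_of_balance"] > 0.005]
if unbalanced.empty:
    print("All journals balance: debits equal credits")
else:
    print("Unbalanced journals detected; downstream extracts should be held back:")
    print(unbalanced)
```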

Now, the question is: how and where can something go wrong with processing integrity when it comes to the General Ledger? To answer this question, let’s take a look at a potential file output from the GL system in Oracle. As mentioned in the Oracle documentation, a GL file can be imported or extracted in multiple formats through automated scheduler jobs such as AutoSys, and one such format (IXGLDN) has the columns listed below (a format-validation sketch follows the list).

       

Each field in the IXGLDN layout is listed below as Field Name (Attributes): Description/Contents.

    • Run group (Alphanumeric, 12 positions): Composed of CW + the date the file was populated, in CCYYMMDD format. For example, the run group for a record created on February 19, 2003 is CW20030219.
    • Sequence number (Numeric, 6 positions): A unique sequence number assigned by the system to each record each time you generate the download file.
    • Company (Numeric, 4 positions): The first 4 positions of the Cross reference number field in Working with Chart of Accounts (WCA). If the first 4 positions include a non-numeric character, such as a hyphen (-), the system skips this character and takes the first 4 numeric values. For example, if the Cross reference number is 100-10-12345, the Company will be 1000. If the Cross reference number field does not contain numeric characters, the Company field will not be populated consistently.
    • Company old (Alphanumeric, 31 positions): The first 11 positions of the Cross reference number field in Work with Chart of Accounts (fast path = WCA).
    • Account number old (Alphanumeric, 31 positions): Positions 12-19 of the Cross reference number field in Work with Chart of Accounts (fast path = WCA).
    • Source code (Alphanumeric, 1 position): The Journal source field from the General Ledger Interface file, indicating the type of transaction that generated the posting. See Accounts Payable Chapter 41: Displaying the General Ledger Interface (DGLI) for a list of journal source codes.
    • Transaction date (Numeric, 8 positions, CCYYMMDD format): The date this record was downloaded to the General Ledger Download file. For example, the transaction date for a record created on February 19, 1999 is 19990219.
    • Reference (Alphanumeric, 10 positions): Composed of CW + the date the download file was created. For example, the Reference for a record created on February 19, 1999 is CW19990219.
    • Description (Alphanumeric, 30 positions): Same as the Reference field.
    • Currency code (Alphanumeric, 5 positions): Blank.
    • Units amount (Numeric, 15 positions with a 2-place decimal): 0.00
    • Units amount sign (Alphanumeric, 1 position): Blank.
    • Transaction amount (Numeric, 15 positions with a 2-place decimal): The Transaction amount field from the General Ledger Interface file.
    • Transaction amount sign (Alphanumeric, 1 position): + if the transaction amount is positive; – if the transaction amount is negative.
    • Base amount (Numeric, 15 positions with a 2-place decimal): The Transaction amount field from the General Ledger Interface file.
    • Base amount sign (Alphanumeric, 1 position): + if the transaction amount is positive; – if the transaction amount is negative.
    • Base rate (Numeric, 12 positions with a 6-place decimal): .000000
    • System (Alphanumeric, 2 positions): CW
    • Program code (Alphanumeric, 5 positions): Blank.
    • Auto rev (Alphanumeric, 1 position): N
    • Posting date (Numeric, 8 positions, CCYYMMDD format): The date of the transaction from the General Ledger Interface file, converted into 8-position format. For example, the posting date for a record from February 19, 1999 is 19990219.
    • Activity (Alphanumeric, 15 positions): Blank.
    • Account category (Alphanumeric, 5 positions): Blank.
    • Document number (Alphanumeric, 15 positions): CW + the date the download file was created + the sequence number for this record. For example, the document number for the first record created on February 19, 1999 is CW19990219000001.
    • To base amount (Numeric, 15 positions with a 2-place decimal): 1.00
    • To base amount sign (Alphanumeric, 1 position): +
    • Effect date (Numeric, 8 positions, CCYYMMDD format): 0
    • Journal book number (Alphanumeric, 12 positions): Blank.
    • MX value 1 (Alphanumeric, 20 positions): Blank.
    • MX value 2 (Alphanumeric, 20 positions): Blank.
    • MX value 3 (Alphanumeric, 20 positions): Blank.
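
Given a fixed-width layout like the one above, a format check can slice each record by position and verify field-level rules such as the CW + CCYYMMDD convention and the numeric fields. The sketch below checks only a few of the fields in Python; the file name and the byte offsets are assumptions for illustration and would need to be taken from the official layout.

```python
import re

# Hypothetical extract file; the field offsets below are illustrative, not the official layout.
EXTRACT_FILE = "IXGLDN_20030219.txt"

RUN_GROUP = slice(0, 12)         # CW + CCYYMMDD, e.g. CW20030219
SEQUENCE_NUMBER = slice(12, 18)  # numeric, 6 positions
COMPANY = slice(18, 22)          # numeric, 4 positions

RUN_GROUP_PATTERN = re.compile(r"^CW(19|20)\d{6}\s*$")

def validate_record(line_no: int, record: str) -> list[str]:
    """Return a list of format violations for a single fixed-width record."""
    problems = []
    if not RUN_GROUP_PATTERN.match(record[RUN_GROUP]):
        problems.append(f"line {line_no}: run group '{record[RUN_GROUP]}' is not CW + CCYYMMDD")
    if not record[SEQUENCE_NUMBER].strip().isdigit():
        problems.append(f"line {line_no}: sequence number '{record[SEQUENCE_NUMBER]}' is not numeric")
    if not record[COMPANY].strip().isdigit():
        problems.append(f"line {line_no}: company '{record[COMPANY]}' is not numeric")
    return problems

violations = []
with open(EXTRACT_FILE) as extract:
    for line_no, record in enumerate(extract, start=1):
        violations.extend(validate_record(line_no, record.rstrip("\n")))

if violations:
    print("Format check failed:")
    print("\n".join(violations))
else:
    print("All records match the expected layout")
```

A check like this can be scheduled by the same job (for example, in AutoSys) that produces the extract, so that format drift introduced by a software upgrade is caught immediately.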

Whenever the version of the software related to scheduling, ETL, or the GL itself is updated, there is a possibility that the files fall out of sync with the expected formats, resulting in wrong data sets. Even when testing teams set out to achieve the desired quality, they face various challenges such as:

          • While the principles recommend banks have one source of data, it is impossible in large banks because the data is distributed across multiple systems.
          • Manual intervention related to all the files involved in testing is unavoidable with the current set of tools and frameworks.
          • Lack of end-to-end automation that instills trust in Quality Assurance and Reporting. 
          • Lost productivity in setting up test files and data.

Another challenge is the reliance on Excel and manual eyeballing to check values and validate rules. As an example, consider the Excel file shared by the BIS as a reference to help banks adhere to the principles. It has multiple worksheets, including one called “Checks” that supervisors and other governance body members can use to validate the risk-related metrics. Today, all of these checks have to be performed manually.
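
The same comparisons can be run programmatically instead of being eyeballed. Below is a minimal sketch using Python and openpyxl that assumes a hypothetical workbook in which each row of the “Checks” worksheet holds a check name, an expected value, and a reported value; the real BIS template would need its actual cell layout mapped first.

```python
from openpyxl import load_workbook

# Hypothetical workbook layout: column A = check name, B = expected value, C = reported value.
WORKBOOK = "bcbs239_reporting_template.xlsx"
TOLERANCE = 0.01

wb = load_workbook(WORKBOOK, data_only=True)  # data_only=True returns cached formula results
checks = wb["Checks"]

failures = []
for name, expected, reported in checks.iter_rows(min_row=2, max_col=3, values_only=True):
    if name is None or not isinstance(expected, (int, float)) or not isinstance(reported, (int, float)):
        continue  # skip blank rows or non-numeric cells
    if abs(expected - reported) > TOLERANCE:
        failures.append(f"{name}: expected {expected}, reported {reported}")

if failures:
    print("Automated checks flagged the following discrepancies:")
    print("\n".join(failures))
else:
    print("All checks in the 'Checks' worksheet passed")
```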

      In our future blogs, we will explain how DataOps Suite can help banks achieve the following benefits and adhere to the principles laid out by organizations such as AICPA, BIS, etc. 

          • Elimination of Manual Intervention
          • End-to-end automation of test scripts
          • Extremely high performance to test billions of records in little time.
          • Integrated scheduling

      References:

      Referral Link 1 

      Referral Link 2

      Referral Link 3

      Datagaps

Datagaps was established in 2010 with the mission of building trust in enterprise data and reports. We provide software for ETL Data Automation, Data Synchronization, Data Quality, Data Transformation, Test Data Generation, and BI Test Automation. An innovative company focused on the highest customer satisfaction, we are passionate about data-driven test automation. Our flagship solutions, ETL Validator, Data Flow, and BI Validator, are designed to help customers automate the testing of ETL, BI, Database, Data Lake, Flat File, and XML data sources. Our tools support data warehousing projects and BI platforms such as Snowflake, Tableau, Amazon Redshift, Oracle Analytics, Salesforce, Microsoft Power BI, Azure Synapse, SAP BusinessObjects, and IBM Cognos.

       

      Queries: contact@datagaps.com