Showing posts with label Oracle Cloud Technical.

Wednesday, 13 May 2026

Oracle Payroll Integration with UKG: Calculation Card vs Element Entry — Understanding the Correct HDL Object


Introduction

When integrating UKG, or any external time system, with Oracle Cloud HCM Payroll, a common design question is:

Should time be loaded as Element Entries, or should we use PayrollTimeCard / PayrollAbsenceRecord HDL objects?

This blog clarifies the relationship between PayrollTimeCard.dat, PayrollAbsenceRecord.dat, ElementEntry.dat, and Calculation Cards, and explains the recommended design approach for Oracle Payroll integrations.


The Core Design Decision

There are two primary integration patterns for third-party time and absence data:

  1. PayrollTimeCard.dat / PayrollAbsenceRecord.dat — time/absence-driven model
  2. ElementEntry.dat — payroll-result-driven model

Calculation Cards should not be treated as a separate third option. They are the payroll representation layer where time and absence data may appear after being loaded through the correct business objects.


Key Clarification: Calculation Cards Are the Target Representation

The most important clarification is:

PayrollTimeCard.dat and PayrollAbsenceRecord.dat both populate Calculation Cards, but you should not use CalculationCard.dat directly to load time or absence data.

In other words, Oracle may create or update Calculation Cards as part of the payroll time or absence load process, but the integration should use the purpose-built HDL objects.

What This Means in Practice

  • PayrollTimeCard.dat creates or updates Calculation Cards with time entries.
  • PayrollAbsenceRecord.dat creates or updates Calculation Entries for absences.
  • CalculationCard.dat should not be used for time or absence integration.

How Oracle Designed This

Oracle separates the object you load from where the data appears in payroll.

HDL Object               | Purpose                         | Where Data Appears
PayrollTimeCard.dat      | Time-card data                  | Calculation Cards / Calculation Entries
PayrollAbsenceRecord.dat | Absence data                    | Calculation Entries
ElementEntry.dat         | Payroll-ready inputs            | Element Entries
CalculationCard.dat      | Generic calculation card object | Not recommended for time/absence loading

Pattern 1: PayrollTimeCard.dat — Recommended for Time

When to Use

Use PayrollTimeCard.dat when UKG is the system of record for approved time, but Oracle Payroll still needs to process the time data.

This approach is useful when Oracle Payroll needs to:

  • Calculate overtime
  • Apply premiums
  • Handle costing
  • Apply jurisdiction-specific payroll logic
  • Preserve work-date level detail

What Happens

  1. UKG sends approved time-card data.
  2. PayrollTimeCard.dat is loaded.
  3. Oracle validates the time-card elements.
  4. Oracle creates or updates Calculation Cards.
  5. Payroll processes the time entries.
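As a rough illustration of the file shape, a PayrollTimeCard.dat load might look like the sketch below. The component and attribute names here are illustrative placeholders only, not a verified METADATA definition; always confirm the exact business-object shape from the View Business Objects task in your environment before building the integration.

```text
COMMENT Illustrative PayrollTimeCard.dat sketch -- attribute names are placeholders.
METADATA|TimeCard|SourceSystemOwner|SourceSystemId|AssignmentNumber|PeriodStartDate|PeriodEndDate
MERGE|TimeCard|UKG|TC_1129_20260316|E1129|2026/03/16|2026/03/22
METADATA|TimeCardEntry|SourceSystemOwner|SourceSystemId|TimeCardId(SourceSystemId)|ElementName|StartTime|StopTime
MERGE|TimeCardEntry|UKG|TCE_1129_20260316_1|TC_1129_20260316|XX Regular Hours US|2026/03/16 08:00|2026/03/16 17:00
```

Note how the source-system keys (SourceSystemOwner, SourceSystemId) tie each Oracle record back to the UKG record, which is what makes later updates and corrections possible.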

Pattern 2: PayrollAbsenceRecord.dat — Recommended for Absences

When to Use

Use PayrollAbsenceRecord.dat when UKG or another external system sends approved absence data and you want Oracle Payroll to process absence-related payroll logic.

What Happens

  • Absence records are loaded using PayrollAbsenceRecord.dat.
  • Oracle creates the related calculation entries.
  • Payroll processes the absence-related elements.
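For comparison, an illustrative PayrollAbsenceRecord.dat fragment might look like the sketch below. As with the time-card sketch, the component and attribute names are placeholders only; take the real METADATA definition from the View Business Objects task.

```text
COMMENT Illustrative PayrollAbsenceRecord.dat sketch -- attribute names are placeholders.
METADATA|AbsenceRecord|SourceSystemOwner|SourceSystemId|AssignmentNumber|AbsenceTypeName|StartDate|EndDate
MERGE|AbsenceRecord|UKG|ABS_1129_20260320|E1129|Sickness|2026/03/20|2026/03/20
```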

Pattern 3: ElementEntry.dat — Payroll-Ready Inputs

When to Use

Use ElementEntry.dat when UKG has already calculated the payroll values and Oracle Payroll only needs to consume the final results.

This may include:

  • Regular pay
  • Overtime
  • Premiums
  • Adjustments
  • Payroll-ready hours or amounts

This pattern can be simpler, but it places more responsibility on the integration for validation, correction handling, duplicate prevention, and reconciliation.
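As a sketch of the payroll-result-driven pattern, an ElementEntry.dat file pairs an ElementEntry component with its ElementEntryValue children. The element name, legislative data group name, and input value below are placeholders for your own configuration:

```text
METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|ElementName|LegislativeDataGroupName|EntryType
MERGE|ElementEntry|2026/03/16|2026/03/22|E1129|XX Regular Hours US|US Legislative Data Group|E
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|ElementName|LegislativeDataGroupName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2026/03/16|2026/03/22|E1129|XX Regular Hours US|US Legislative Data Group|Hours|40
```

Because the values arrive payroll-ready, this is where the integration itself must guard against duplicates and handle corrections, as noted above.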


Why Not Use CalculationCard.dat?

Although PayrollTimeCard.dat and PayrollAbsenceRecord.dat populate Calculation Cards, you should not use CalculationCard.dat directly for time or absence loads.

Using CalculationCard.dat for time or absence can create issues because:

  • It bypasses purpose-built validation logic.
  • It does not align with Oracle’s recommended payroll data model.
  • It can make reconciliation and maintenance more difficult.

The Real Relationship

The correct way to think about this is:

PayrollTimeCard.dat and PayrollAbsenceRecord.dat are calculation-card-based integrations, but they are implemented through specialized business objects, not through CalculationCard.dat.


Benefits of PayrollTimeCard.dat and PayrollAbsenceRecord.dat

1. Data Model Alignment

  • Matches Oracle Payroll architecture
  • Preserves time and absence granularity

2. Built-In Validation

  • Supports element eligibility validation
  • Provides structured error handling

3. Better Reconciliation

  • Aligns source system data with payroll processing
  • Supports audit at a detailed level

Challenges to Plan For

1. Source Key Strategy

A strong source key strategy is required for updates, corrections, and reconciliation.

2. Correction Model

Corrections should generally be made in the source system and re-imported, rather than manually adjusted in payroll.

3. Visibility

  • Data appears in Calculation Cards.
  • It does not appear in Oracle Time and Labor time cards.

4. Setup Requirements

The design requires proper configuration of:

  • Payroll elements
  • Element eligibility
  • Related generated elements where applicable

Practical Recommendation

Use PayrollTimeCard.dat when:

  • You want time-driven payroll processing.
  • You need detailed audit and reconciliation.
  • You want Oracle Payroll to process time-card-style data.

Use PayrollAbsenceRecord.dat when:

  • You are integrating approved absence data.
  • You want Oracle Payroll to process absence-related payroll logic.

Use ElementEntry.dat when:

  • You receive payroll-ready values from UKG.
  • UKG has already calculated the final payable hours or amounts.

Avoid:

  • Using CalculationCard.dat directly for time or absence data.

Conclusion

The confusion around Calculation Cards usually comes from treating them as an integration option rather than understanding what they represent.

Calculation Cards are the internal payroll representation of time and absence data, but the correct load mechanism is through the appropriate HDL business object.

Final Takeaway

  • PayrollTimeCard.dat and PayrollAbsenceRecord.dat do populate Calculation Cards.
  • You should not use CalculationCard.dat directly for time or absence loads.
  • ElementEntry.dat remains valid for payroll-ready inputs.

Using the correct HDL object ensures that the integration is aligned with Oracle’s payroll architecture, easier to reconcile, and more scalable for long-term maintenance.

Monday, 23 March 2026

Loading Oracle Cloud Time and Labor Time Entries via REST API using PL/SQL (UTL_HTTP)


Oracle Cloud HCM Time and Labor integrations are commonly built using Python, OCI Integration, or other middleware tools. However, there are situations where a lightweight, database-driven approach is useful, especially for rapid prototyping, controlled batch processing, or small-volume integrations.

In this example, the REST API is invoked directly from a local Oracle Express Edition database running on a personal system. The solution uses a custom staging table, PL/SQL procedures, UTL_HTTP, and an Oracle wallet for secure HTTPS communication. The example focuses on loading time entries through the REST API flow and not a separate time event integration model.

Important Note: This REST API supports creation of complete time entries that include both start time and stop time (or measured duration). It leverages the same configuration used for standard time entry in Oracle Time and Labor and does not require any additional setup.

However, if the business requirement involves capturing partial time events (such as only a start time or only a stop time), a different REST endpoint (timeEventRequests rather than timeRecordEventRequests) must be used. That approach requires additional configuration, including event definitions and device-related attributes.

This approach is particularly useful for:

  • One-time loading of time entries for payroll parallel run testing and time parallel run testing
  • Rapid prototyping
  • Small-scale integrations
  • Controlled batch processing from staging tables
  • Capturing time entries in a custom application/UI and then loading them into OTL in a more controlled way

Solution Overview

The design follows a simple but effective pattern:

  1. Insert time entry records into a custom staging table.
  2. Read unprocessed rows from the staging table.
  3. Construct the REST payload in PL/SQL.
  4. Call Oracle Cloud HCM REST API using UTL_HTTP.
  5. Capture response details and update the processing status.

Step 1: Create the staging table

The staging table stores inbound time entry data and maintains processing metadata such as status, request identifiers, API response, and error message.

CREATE TABLE xx_time_entry_stg (
    id                     NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    reporter_id            VARCHAR2(100)   NOT NULL,
    start_time             VARCHAR2(50)    NOT NULL,
    stop_time              VARCHAR2(50)    NOT NULL,
    payroll_time_type      VARCHAR2(200)   NOT NULL,
    comment_txt            VARCHAR2(1000),
    process_status         VARCHAR2(30)    DEFAULT 'NEW',
    request_id             NUMBER,
    event_id               NUMBER,
    response_clob          CLOB,
    error_message          VARCHAR2(4000),
    created_on             DATE            DEFAULT SYSDATE,
    updated_on             DATE
);

This structure provides a strong foundation for traceability and troubleshooting:

  • process_status tracks whether a row is NEW, PROCESSING, SUCCESS, or ERROR.
  • response_clob stores the full REST API response for audit and debugging.
  • error_message stores a summarized failure reason.
  • request_id and event_id help connect the database record to the HCM transaction.

Step 2: Insert sample time entry data

Sample records can be inserted into the staging table before processing. In this example, four days of time entries are loaded for one employee.

INSERT INTO xx_time_entry_stg (
    reporter_id,
    start_time,
    stop_time,
    payroll_time_type,
    comment_txt
) VALUES (
    '1129',
    '2026-03-16T08:00:00.000-04:00',
    '2026-03-16T17:00:00.000-04:00',
    'XX Regular Hours US',
    'Loaded from custom table-1'
);
INSERT INTO xx_time_entry_stg (
    reporter_id,
    start_time,
    stop_time,
    payroll_time_type,
    comment_txt
) VALUES (
    '1129',
    '2026-03-17T08:00:00.000-04:00',
    '2026-03-17T17:00:00.000-04:00',
    'XX Regular Hours US',
    'Loaded from custom table-1'
);
INSERT INTO xx_time_entry_stg (
    reporter_id,
    start_time,
    stop_time,
    payroll_time_type,
    comment_txt
) VALUES (
    '1129',
    '2026-03-18T08:00:00.000-04:00',
    '2026-03-18T17:00:00.000-04:00',
    'XX Regular Hours US',
    'Loaded from custom table-1'
);
INSERT INTO xx_time_entry_stg (
    reporter_id,
    start_time,
    stop_time,
    payroll_time_type,
    comment_txt
) VALUES (
    '1129',
    '2026-03-19T08:00:00.000-04:00',
    '2026-03-19T17:00:00.000-04:00',
    'XX Regular Hours US',
    'Loaded from custom table-1'
);
COMMIT;

The timestamp values are provided in ISO 8601 format with timezone offset, which is important for Oracle Cloud HCM REST requests.
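If the staging data originates as Oracle TIMESTAMP values rather than pre-formatted strings, one way to produce this ISO 8601 format is TO_CHAR with a time-zone-aware format mask. The time zone name below is just an example:

```sql
SELECT TO_CHAR(
         SYSTIMESTAMP AT TIME ZONE 'America/New_York',
         'YYYY-MM-DD"T"HH24:MI:SS.FF3TZH:TZM'
       ) AS iso_8601_ts
FROM   dual;
-- e.g. 2026-03-16T08:00:00.000-04:00
```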

Step 3: Build the PL/SQL procedure that loads one time entry

The main procedure reads a staging row, constructs the JSON payload, invokes the REST API, reads the response, and updates the row status.

CREATE OR REPLACE PROCEDURE xx_load_one_time_entry (
    p_id                IN NUMBER,
    p_host              IN VARCHAR2,
    p_username          IN VARCHAR2,
    p_password          IN VARCHAR2,
    p_wallet_path       IN VARCHAR2,
    p_wallet_password   IN VARCHAR2
) AS
    l_url              VARCHAR2(2000);
    l_req              UTL_HTTP.req;
    l_resp             UTL_HTTP.resp;
    l_buffer           VARCHAR2(32767);
    l_response_clob    CLOB;
    l_payload          CLOB;

    l_reporter_id      xx_time_entry_stg.reporter_id%TYPE;
    l_start_time       xx_time_entry_stg.start_time%TYPE;
    l_stop_time        xx_time_entry_stg.stop_time%TYPE;
    l_payroll_type     xx_time_entry_stg.payroll_time_type%TYPE;
    l_comment_txt      xx_time_entry_stg.comment_txt%TYPE;

    l_request_id       NUMBER;
    l_event_id         NUMBER;
    l_event_status     VARCHAR2(100);
    l_status_code      PLS_INTEGER;
    l_reason_phrase    VARCHAR2(256);
    l_error_message    VARCHAR2(4000);
BEGIN
    SELECT reporter_id,
           start_time,
           stop_time,
           payroll_time_type,
           NVL(comment_txt, 'Loaded from custom table')
    INTO   l_reporter_id,
           l_start_time,
           l_stop_time,
           l_payroll_type,
           l_comment_txt
    FROM   xx_time_entry_stg
    WHERE  id = p_id
    FOR UPDATE;

    UPDATE xx_time_entry_stg
       SET process_status = 'PROCESSING',
           updated_on     = SYSDATE,
           error_message  = NULL
     WHERE id = p_id;

    l_url := 'https://' || p_host || '/hcmRestApi/resources/11.13.18.05/timeRecordEventRequests';

    l_payload :=
           '{'
        || '"processInline":"Y",'
        || '"processMode":"TIME_SAVE",'
        || '"timeRecordEvent":['
        ||   '{'
        ||     '"startTime":"' || REPLACE(REPLACE(l_start_time, '\', '\\'), '"', '\"') || '",'
        ||     '"stopTime":"' || REPLACE(REPLACE(l_stop_time, '\', '\\'), '"', '\"') || '",'
        ||     '"reporterIdType":"PERSON",'
        ||     '"reporterId":"' || REPLACE(REPLACE(l_reporter_id, '\', '\\'), '"', '\"') || '",'
        ||     '"comment":"' || REPLACE(REPLACE(l_comment_txt, '\', '\\'), '"', '\"') || '",'
        ||     '"operationType":"ADD",'
        ||     '"timeRecordEventAttribute":['
        ||       '{'
        ||         '"attributeName":"PayrollTimeType",'
        ||         '"attributeValue":"' || REPLACE(REPLACE(l_payroll_type, '\', '\\'), '"', '\"') || '"'
        ||       '}'
        ||     ']'
        ||   '}'
        || ']'
        || '}';

    UTL_HTTP.set_detailed_excp_support(TRUE);
    UTL_HTTP.set_response_error_check(FALSE);
    UTL_HTTP.set_wallet(p_wallet_path, p_wallet_password);

    l_req := UTL_HTTP.begin_request(l_url, 'POST', 'HTTP/1.1');

    UTL_HTTP.set_authentication(l_req, p_username, p_password, 'Basic', FALSE);
    UTL_HTTP.set_header(l_req, 'Content-Type', 'application/json');
    UTL_HTTP.set_header(l_req, 'Accept', 'application/json');
    UTL_HTTP.set_header(l_req, 'Content-Length', TO_CHAR(DBMS_LOB.getlength(l_payload)));
    UTL_HTTP.write_text(l_req, l_payload);

    l_resp := UTL_HTTP.get_response(l_req);
    l_status_code   := l_resp.status_code;
    l_reason_phrase := l_resp.reason_phrase;

    DBMS_LOB.createtemporary(l_response_clob, TRUE);

    BEGIN
        LOOP
            UTL_HTTP.read_text(l_resp, l_buffer, 32767);
            DBMS_LOB.writeappend(l_response_clob, LENGTH(l_buffer), l_buffer);
        END LOOP;
    EXCEPTION
        WHEN UTL_HTTP.end_of_body THEN
            NULL;
    END;

    UTL_HTTP.end_response(l_resp);

    BEGIN
        SELECT JSON_VALUE(l_response_clob, '$.timeRecordEventRequestId' RETURNING NUMBER),
               JSON_VALUE(l_response_clob, '$.timeRecordEvent[0].timeRecordEventId' RETURNING NUMBER),
               JSON_VALUE(l_response_clob, '$.timeRecordEvent[0].eventStatus' RETURNING VARCHAR2(100))
          INTO l_request_id, l_event_id, l_event_status
          FROM dual;
    EXCEPTION
        WHEN OTHERS THEN
            l_request_id   := NULL;
            l_event_id     := NULL;
            l_event_status := NULL;
    END;

    IF l_status_code BETWEEN 200 AND 299
       AND l_event_status = 'COMPLETE'
    THEN
        UPDATE xx_time_entry_stg
           SET process_status = 'SUCCESS',
               request_id     = l_request_id,
               event_id       = l_event_id,
               response_clob  = l_response_clob,
               updated_on     = SYSDATE,
               error_message  = NULL
         WHERE id = p_id;
    ELSE
        l_error_message :=
            SUBSTR(
                'HTTP ' || l_status_code || ' ' || l_reason_phrase || ' | ' ||
                DBMS_LOB.SUBSTR(l_response_clob, 3000, 1),
                1,
                4000
            );

        UPDATE xx_time_entry_stg
           SET process_status = 'ERROR',
               request_id     = l_request_id,
               event_id       = l_event_id,
               response_clob  = l_response_clob,
               updated_on     = SYSDATE,
               error_message  = l_error_message
         WHERE id = p_id;
    END IF;

    COMMIT;

EXCEPTION
    WHEN OTHERS THEN
        BEGIN
            UTL_HTTP.end_response(l_resp);
        EXCEPTION
            WHEN OTHERS THEN
                NULL;
        END;

        l_error_message :=
            SUBSTR(
                SQLERRM || ' | ' || UTL_HTTP.get_detailed_sqlerrm,
                1,
                4000
            );

        UPDATE xx_time_entry_stg
           SET process_status = 'ERROR',
               updated_on     = SYSDATE,
               error_message  = l_error_message
         WHERE id = p_id;

        COMMIT;
        RAISE;
END;
/

How the procedure works

  • It selects a specific staging row by ID and locks it using FOR UPDATE.
  • It updates the row to PROCESSING before the API call starts.
  • It constructs the JSON payload dynamically in PL/SQL.
  • It uses UTL_HTTP to send the HTTPS POST request.
  • It reads the REST response into a CLOB.
  • It parses the response using JSON_VALUE.
  • It marks the row as SUCCESS only when the HTTP status is successful and the returned event status is COMPLETE.
  • Otherwise, it stores the failure details and marks the record as ERROR.

REST payload design

The payload uses processInline = "Y" and processMode = "TIME_SAVE". This is important because the integration is being used to save a time entry through the REST API processing flow. You can change processMode to "TIME_SUBMIT" or "TIME_ENTER" depending on your requirement. The payload also passes:

  • startTime - start time of the time entry
  • stopTime - stop time of the time entry
  • reporterIdType - PERSON
  • reporterId - Person Number
  • comment - any custom optional comment
  • operationType - ADD
  • PayrollTimeType - time type defined in the layout component set (the element name, not the display name)

This makes the example practical for a common Time and Labor requirement where the integration needs to create a time entry with a payroll time type.

Important implementation details

1. Wallet configuration for HTTPS

The call uses UTL_HTTP.set_wallet so the local Oracle database can trust the SSL certificate presented by Oracle Cloud. Without a properly configured wallet, HTTPS requests typically fail.

2. Content-Length header

The code explicitly sets the Content-Length header using DBMS_LOB.getlength(l_payload). This is a very important detail because missing this header can result in HTTP 411 Length Required errors. Note that getlength returns a character count; for payloads containing multibyte characters, the byte length can differ, so a byte-accurate calculation may be needed in those cases.

3. JSON escaping

The code carefully escapes backslashes and double quotes when building the payload. This is important because comments and other text values can break the JSON structure if they are not escaped correctly.
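On database releases that support the SQL/JSON generation functions (12.2 and later), an alternative to manual escaping is to let the database build the JSON, which handles quoting automatically. A minimal sketch:

```sql
SELECT JSON_OBJECT(
         'reporterId' VALUE '1129',
         'comment'    VALUE 'He said "hello" \ back'
       ) AS payload
FROM   dual;
-- The quote and backslash in the comment are escaped automatically.
```

This removes a whole class of malformed-payload errors at the cost of requiring a newer database release.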

4. Detailed exception support

UTL_HTTP.set_detailed_excp_support(TRUE) makes troubleshooting easier by exposing more detailed network and HTTP diagnostics.

5. Response parsing

The API response is stored in a CLOB and parsed for:

  • timeRecordEventRequestId
  • timeRecordEventId
  • eventStatus

This gives the solution both technical traceability and business-level visibility into the outcome.
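When a single request creates several events, JSON_TABLE can unnest the timeRecordEvent array in one pass instead of repeated JSON_VALUE calls. A sketch against the stored response CLOB, using the same field names parsed in the procedure above:

```sql
SELECT jt.request_id,
       jt.event_id,
       jt.event_status
FROM   xx_time_entry_stg s,
       JSON_TABLE(
         s.response_clob, '$'
         COLUMNS (
           request_id NUMBER PATH '$.timeRecordEventRequestId',
           NESTED PATH '$.timeRecordEvent[*]'
           COLUMNS (
             event_id     NUMBER        PATH '$.timeRecordEventId',
             event_status VARCHAR2(100) PATH '$.eventStatus'
           )
         )
       ) jt
WHERE  s.id = 1;
```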

Step 4: Batch procedure for multiple entries

To process all new staging rows, a batch procedure loops through records with status NEW and calls the single-row loader for each entry.

CREATE OR REPLACE PROCEDURE xx_load_time_entries_batch (
    p_host              IN VARCHAR2,
    p_username          IN VARCHAR2,
    p_password          IN VARCHAR2,
    p_wallet_path       IN VARCHAR2,
    p_wallet_password   IN VARCHAR2
) AS
BEGIN
    FOR r IN (
        SELECT id
        FROM xx_time_entry_stg
        WHERE process_status = 'NEW'
        ORDER BY id
    ) LOOP
        BEGIN
            xx_load_one_time_entry(
                p_id              => r.id,
                p_host            => p_host,
                p_username        => p_username,
                p_password        => p_password,
                p_wallet_path     => p_wallet_path,
                p_wallet_password => p_wallet_password
            );
        EXCEPTION
            WHEN OTHERS THEN
                -- Failure details are already logged on the staging row by
                -- xx_load_one_time_entry; continue with the next record.
                NULL;
        END;
    END LOOP;
END;
/

This design keeps the processing model simple. Each row is handled independently, which makes error isolation easier. One failed row does not stop the remaining rows from being attempted.

Step 5: Execute the batch load

The batch procedure can be executed with the Oracle Cloud host, credentials, and wallet details.

BEGIN
    xx_load_time_entries_batch(
        p_host            => 'your-test-instance.fa.us6.oraclecloud.com',
        p_username        => 'your_username',
        p_password        => 'your_password',
        p_wallet_path     => 'file:c:\wallet',
        p_wallet_password => 'your_wallet_password'
    );
END;
/

After execution, the staging table can be queried to confirm whether each record completed successfully or failed with an error.

SELECT *
FROM xx_time_entry_stg
ORDER BY id;
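A quick summary by status also makes it easy to see how the batch went overall:

```sql
SELECT process_status,
       COUNT(*) AS record_count
FROM   xx_time_entry_stg
GROUP  BY process_status
ORDER  BY process_status;
```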

Testing the solution

A good test cycle for this approach is:

  1. Record inserted in staging table
  2. Run the batch procedure
  3. Status transitions:
    • NEW → PROCESSING → SUCCESS / ERROR
  4. Response captured in response_clob
  5. Review the failed records if any
  6. Validate time entries from the front end (you don't have to run any job or scheduled process to see the entries in the front end)



This gives a clean and repeatable way to validate the integration end to end.

Common issues and troubleshooting

Issue                         | Possible Cause                       | Typical Fix
ORA-29273                     | HTTP request failed                  | Check wallet setup, ACLs, hostname, SSL trust, and network connectivity
ORA-28759                     | Failure to open wallet file          | Verify wallet path, database service account permissions, and wallet contents
HTTP 411 Length Required      | Missing Content-Length header        | Explicitly set Content-Length before writing the payload
Invalid JSON or API rejection | Malformed payload or unescaped text  | Escape values correctly and validate the payload structure
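The ORA-29273 case often traces back to a missing network ACL. On 12c and later, an ACL granting HTTP access to the Cloud host can be created with DBMS_NETWORK_ACL_ADMIN (run as a privileged user; the schema and host below are placeholders):

```sql
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => 'your-test-instance.fa.us6.oraclecloud.com',
    ace  => XS$ACE_TYPE(
              privilege_list => XS$NAME_LIST('http'),
              principal_name => 'YOUR_SCHEMA',
              principal_type => XS_ACL.PTYPE_DB));
END;
/
```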

Strengths of this pattern

  • Simple and easy to understand
  • Fully database-driven
  • Good audit trail through staging and stored responses
  • Useful for low-volume or controlled integrations
  • No separate middleware is required for the basic flow

Practical limitations

  • It is not the best pattern for very high transaction volumes.
  • Basic authentication may not be the preferred production-grade option in all environments.
  • Sequential row-by-row processing can become a bottleneck at scale.
  • Credential handling must be hardened before production use.

Recommendations for production hardening

  • Move credentials out of anonymous execution blocks and store them securely.
  • Add a retry strategy for transient failures.
  • Introduce better logging for request payloads, response codes, and execution timestamps.
  • Add validation logic before calling the REST API.
  • Consider parallel processing for larger data volumes.
  • Review whether OAuth-based authentication is required in the target environment.

Key takeaway

This example shows how Oracle Database can act as a lightweight integration layer for Oracle Cloud HCM Time and Labor. By combining a staging table, PL/SQL, UTL_HTTP, and wallet-based HTTPS configuration, it is possible to load time entries directly from a local Oracle XE environment while still preserving auditability and control.

For teams that want a compact and database-centric integration option, this is a practical pattern worth considering.

Disclaimer

This example is intended for demonstration and learning purposes. Additional controls, security hardening, validation, and operational considerations may be required before adopting the pattern in a production environment.

Thursday, 28 January 2021

SQL Query to Get Database Items and context

SELECT dbi.user_name,
       con.base_context_name
FROM   ff_database_items_vl     dbi,
       ff_user_entities_vl      userent,
       ff_routes_vl             routes,
       ff_route_context_usages  rcu,
       ff_contexts_vl           con
WHERE  dbi.user_entity_id = userent.user_entity_id
AND    userent.route_id   = routes.route_id
AND    dbi.base_user_name = <DBI Name>
AND    routes.route_id    = rcu.route_id
AND    con.context_id     = rcu.context_id

==========================================================

SELECT ff.formula_name,
       con.base_context_name
FROM   ff_formulas_vl           ff,
       ff_contexts_vl           con,
       ff_ftype_context_usages  fcu,
       ff_formula_types_vl      ft
WHERE  ff.formula_type_id  = ft.formula_type_id
AND    fcu.formula_type_id = ft.formula_type_id
AND    con.context_id      = fcu.context_id
AND    ff.base_formula_name LIKE <Formula Base Name>

Wednesday, 24 January 2018

Payroll Batch Loader Error Query



select ml.*
from pay_bl_message_lines_vl ml
,    pay_batch_lines bl
,    pay_batch_headers bh
where bh.batch_id = bl.batch_id
and   bl.batch_line_id = ml.source_id
and   bh.batch_name = 'BATCH NAME'
and   bl.batch_line_status = 'E';


Full Query for all PBL errors with batch name, task name, task action name, assignment number and error message


SELECT pbh.batch_name,
       pbh.legislative_data_group_id,
       pbt.display_task_name,
       pbta.display_task_action_name,
       pivot_line_value_data.action_parameter_name1,
       pivot_line_value_data.action_parameter_value1,
       pivot_line_value_data.action_parameter_name2,
       pivot_line_value_data.action_parameter_value2,
       pbmlv.message_text
FROM   (SELECT batch_line_id,
               MAX(DECODE(rn, 1, display_action_parameter_name)) action_parameter_name1,
               MAX(DECODE(rn, 1, action_parameter_value))        action_parameter_value1,
               MAX(DECODE(rn, 2, display_action_parameter_name)) action_parameter_name2,
               MAX(DECODE(rn, 2, action_parameter_value))        action_parameter_value2,
               MAX(DECODE(rn, 3, display_action_parameter_name)) action_parameter_name3,
               MAX(DECODE(rn, 3, action_parameter_value))        action_parameter_value3,
               MAX(DECODE(rn, 4, display_action_parameter_name)) action_parameter_name4,
               MAX(DECODE(rn, 4, action_parameter_value))        action_parameter_value4
        FROM   (SELECT batch_line_id,
                       action_parameter_name,
                       action_parameter_value,
                       display_action_parameter_name,
                       ROW_NUMBER() OVER (PARTITION BY batch_line_id
                                          ORDER BY batch_line_id,
                                                   action_parameter_name) rn
                FROM   (SELECT pbavl.element_name   action_parameter_name,
                               pbavl.parameter_name display_action_parameter_name,
                               pblv.action_parameter_value,
                               pblv.batch_line_id
                        FROM   pay_task_parameters_vl pbavl,
                               pay_batch_line_values  pblv
                        WHERE  pbavl.base_task_parameter_id = pblv.action_parameter_id
                        AND    pbavl.display_flag <> 'N'))
        GROUP BY batch_line_id) pivot_line_value_data,
       pay_batch_lines          pbl,
       pay_batch_headers        pbh,
       pay_bl_message_lines_vl  pbmlv,
       pay_bl_task_actions_vl   pbta,
       pay_bl_tasks_vl          pbt
WHERE  pivot_line_value_data.batch_line_id = pbl.batch_line_id
AND    pbl.batch_id          = pbh.batch_id
AND    pbl.batch_line_id     = pbmlv.source_id
AND    pbl.batch_line_status = 'E'
AND    pbh.batch_name LIKE '%Pay%'
AND    pbt.task_id           = pbta.task_id
AND    pbl.task_action_id    = pbta.task_action_id