
Oracle Fusion Payroll - Flow Patterns (Part 1)


This tutorial deals with Fusion Payroll and how it functions. In this part, we will discuss Flow Patterns.

Flow Patterns

Flow Patterns are a new concept in Fusion Payroll: an orchestration of the Payroll Cycle with the Payroll Tasks. Payroll Tasks can be manual interventions, processes, reports, etc.

A Payroll Flow Pattern can run as a single automatic task or as a pattern with many tasks. Basically, a flow pattern stitches payroll tasks together with the payroll cycle.

There are seeded flow patterns, which typically include critical processes like Payroll Calculation, Prepayments, and other payroll processes. User-defined flow patterns can be created either from scratch (without copy) or by copying an existing flow pattern (with copy).

There are three components that are part of a flow pattern: Tasks (the Payroll Tasks), Flow Tasks (how the flow pattern utilises the tasks), and Flows (the flows to which the flow tasks are stitched).

Once a flow pattern is available (either seeded or user-created), it is run through Flow Submission. In this UI, the required flow pattern is selected, the inputs are given, and after submission the flow pattern is executed.

The task associated with it is Manage Payroll Flow Patterns.

Tasks

Tasks are the building blocks for the creation of flows. All tasks are seeded, whether Payroll Calculation, Rollback, or any other payroll task. Users cannot create their own tasks, as they are provided by the legislation or Global Payroll.

A task can be any one of the following types: Process, Manual Intervention, Services, PL/SQL, or Reports. These tasks are used in Flows and Batch Loaders.

There are two components to a task: Actions (that are to be executed in the task) and Action Parameters (the inputs and outputs to the task).

The primary Process Task Actions that are used in Payroll are: Submit, Mark for Retry, Retry, and Rollback. These are internal to the payroll processes.

The Report Task Actions consist of only the Submit action.

Flow Tasks

Flow Tasks are created when a Flow Pattern is created from Tasks.

As with Payroll Tasks, the components of a flow task are: Actions and Action Parameters.

There are two types of Flow Tasks that a flow pattern would have: Start Task and End Task. The Start Task instructs the flow about the actions to be performed at the start of the flow, while the End Task instructs the flow when to stop.

Flows

The flow patterns orchestrate the Payroll Cycle utilising tasks as Flow Tasks according to the requirements of the user.

Flow patterns have to be sequenced so that the flows know in which order to execute the flow tasks. The sequencing is highly flexible - it can be changed as per the user’s requirements. It also supports parallel execution, meaning that multiple tasks can be executed simultaneously. Once such a set of parallel tasks is done, the flow moves on to the next task in the sequence.

If any errors are present in a task being performed, the flow stops its execution at that point.

The components of a flow pattern are:

  • Flow Tasks - the tasks under the flow pattern.

  • Flow Task Parameters - the parameters to the flow tasks.

  • Flow Parameters - a subset of the flow task parameters; values are passed from the flow parameters to the flow task parameters.

  • Flow Sequence - a critical component; specifies the order in which the tasks are executed in the flow.

Fig. 1 - A Flow Sequence. 1-7 are tasks to be executed

The above diagram depicts a flow sequence, where tasks 1 through 7 are to be carried out. Once Task 1 is completed, Task 2 is executed. After the execution of Task 2, Tasks 3, 4, and 5 are simultaneously carried out (parallel execution). Once Tasks 3, 4, and 5 are done, Task 6 is executed, followed by Task 7.

The Legislative Data Group is optional for flow patterns. This is because flow patterns are utilised not only by Global Payroll, but also by other Human Capital Management (HCM) modules like HCM Extract.

Flow Parameters

The following flowchart depicts the decision of selecting the flow parameters:

Fig. 2 - The selection process of Flow Parameters

Flow Parameters have the following critical aspects to them:

  • Display and display format - how it is to be displayed to the user (e.g. text, date, etc.).

  • Lookups and value sets

  • Usage - whether the parameter is an input, output, or both.

  • Sequence - the order of execution.

  • Parameter Basis and Basis Value - how the parameter is utilised and its value.

Flow Pattern Submission

Once a flow pattern is created, it has to be submitted. The following steps depict this process:

  1. For a flow pattern consisting of a single task, the task associated with it is Submit a Payroll Process or Report.
    If the flow pattern consists of more than one task, the task associated with it is Submit a Payroll Flow.

  2. After submission of the flow pattern, a screen is shown where the values for the flow parameters have to be entered.

  3. Interactions with other flow patterns (if any).

  4. Schedule for the flow (when to be executed, repetitions if any, etc.).


Oracle Fusion Payroll - Flow Patterns (Part 2)


This tutorial deals with Fusion Payroll and how it functions. In this part, we will discuss the UIs related to Flow Patterns.

Flow Patterns

As discussed in the previous part of the tutorial, Flow Patterns are a new concept in Fusion Payroll. It is an orchestration of the Payroll Cycle with the Payroll Tasks. A Payroll Flow Pattern can run as a single automatic task or as a pattern with many tasks. Basically, a flow pattern stitches payroll tasks together with the payroll cycle.

The task associated with it is Manage Payroll Flow Patterns.

To view and edit flow patterns, the following steps are to be followed:

  1. Go to the Manage Payroll Flow Patterns task under Define Payroll Flow Patterns.

  2. Select the required Legislative Data Group and click on the Search button.

  3. From the search results, click on the name of a flow pattern to view its details.

  4. Click on the tabs Tasks, Task Sequence, and Parameters to view the respective details for that flow pattern.

  5. The Tasks tab will show a list of tasks that are under the flow pattern.

  6. The Task Sequence tab shows the sequence in which the tasks are carried out. The Following Task column determines the next task to be executed. Start Flow is the first task to be executed, while End Flow is the last task to be executed (see the screenshot below for better understanding).

  7. The Parameters tab shows a list of Flow Parameters for the flow pattern. Here, the Display, Display Format, Lookup, Usage, etc. for each parameter are defined.

Fig. 1 - The Manage Payroll Flow Patterns task

Fig. 2 - The Manage Payroll Flow Patterns page

Fig. 3 - List of tasks in the ‘End of Year’ flow pattern

Fig. 4 - The Task Sequence of the flow pattern. Numbers indicate the sequence followed for the tasks

Fig. 5 - Flow Parameters in the flow pattern

To copy a seeded flow pattern, follow the steps below:

  1. Go to the Manage Payroll Flow Patterns task under Define Payroll Flow Patterns.

  2. Click on the Copy icon (circled in the screenshot below).

  3. Enter the name of the new flow pattern, select the flow pattern to be copied and the Legislative Data Group from the dropdowns, and click on the Save and Close button.

  4. Click on the Search button to view the created flow pattern in the search results.

  5. Click on the name of the copied flow pattern.

  6. To edit the details, click on the Edit button on the top right-hand corner.

  7. Select a task to be edited and click on the Edit icon (circled in the screenshot below). Then, click on the pencil icon under Edit Task to edit the task.

  8. To edit the parameters of the flow task, select a parameter from the list. You can then edit the details like Display, Display Format, etc. under the Parameter Details section below the list.

  9. In the Task Sequence tab, click on the Edit icon to edit the sequence of tasks to be executed, or the Add icon to add a flow task to the task sequence.

  10. Likewise, in the Parameters tab, you can use the Edit and Add icons (circled in the screenshot below) to edit a flow parameter or to add a new flow parameter to the flow pattern.

  11. After you are finished with the editing, click on the Save button followed by the Submit button on the top right-hand corner.

 

Fig. 6 - Copying a seeded flow pattern. The Copy icon is circled

Fig. 7 - Copying a flow pattern

Fig. 8 - The created copy shows up in the search results

Fig. 9 - The page of the copied flow pattern

Fig. 10 - Editing a flow task

Fig. 11 - Editing the flow task details

Fig. 12 - Editing the task sequence. The Edit and Add icons are circled

Fig. 13 - Editing the flow parameters. The Edit and Add icons are circled

Depending on the number of tasks that make up the flow pattern, there are two different UIs for the submission of the flow pattern: Submit a Payroll Process or Report (for flow patterns with only a single task) and Submit a Payroll Flow (for flow patterns containing more than one flow task).

Fig. 14 - The Calculate Payroll flow pattern which has only a single task

Fig. 15 - The Task Sequence only has two tasks: Start Flow and Calculate Payroll

Hence, for the Calculate Payroll flow pattern, the Task Sequence would be as follows: Start Flow will be executed first, since that is the start task associated with all flow patterns; then, the Calculate Payroll task will be executed; and upon completion of the Calculate Payroll task, the End Flow will be executed as the final task in the flow pattern.

Oracle Fusion Payroll - Hire Person


This tutorial deals with Fusion Payroll and how it functions. In this part, we will discuss the Hire Person module of Payroll.

Hire Person

Once the payroll processes are set up, the next step is to actually hire employees. To hire a person onto a payroll, you have to navigate to the following: Navigator -> New Person -> Hire an Employee.

The critical information required is as follows:

  • Legal Employer - Organisation or Company

  • Personal Info - Name, Date of Birth, Address, and Contact Numbers.

  • Employment Info - Business Unit, Department(s), Job or Position, Salary Info, Payroll Info, etc.

Once the critical information is provided, the Roles that are associated with the employee are to be defined. By default, the Employee role would be attached. Apart from this, other roles can be added to the employee.

After submission, the hire has to be approved. This comes under Approvals, where the approval hierarchy for the hiring of employees is defined. It can either be a direct approval or be done through an approval cycle.

To hire an employee, the following steps are to be followed:

  1. Click on the Navigator icon (circled in the screenshot below). Go to Workforce Management -> New Person.

  2. Click on Hire an Employee under Tasks on the left of the screen.

  3. Enter the Basic Details of the employee.

  4. On the Legal Employer dropdown, click on the Search… option to search and select the employer of the new hire.

  5. Enter the Personal Details of the employee.

  6. Click on the Next button.

  7. Enter the Person Information of the employee.

  8. Enter the Legislative Information of the employee. Here, the legislation-specific address validation takes place.

  9. Click on the Next button.

  10. Enter the Employment Information of the employee. Here, the relationship of the employee with the payroll is specified and integrated with various modules like Payroll, Compensation, and Benefits.

  11. Make sure that the Assignment Status is set to Active - Payroll Eligible for the employee to be eligible for Payroll Calculation.

  12. Enter the Manager Details, Payroll Details, and Salary Information of the employee.

  13. Click on the Next button.

  14. Enter the various roles associated with the employee. By default, the Employee role is set as the employee’s role.

  15. Click on the Add Role button to add a role for the employee.

  16. Enter a role name and click on the Search button.

  17. Select the required role(s) for the employee and click on the OK button. The selected role(s) will then be added to the employee.

  18. Click on the Next button.

  19. Review the details on the screen showing the information you just entered for the employee. If any edits are to be made, use the Back button.

  20. After reviewing the details, click on the Submit button.

  21. Click on the Yes button on the pop-up message.

  22. On the confirmation message, click on the OK button.

Fig. 1 - Navigating to the Hire Person Work Area. The Navigator icon is circled

Fig. 2 - Hire an Employee

Fig. 3 - Searching and selecting a Legal Employer

Fig. 4 - The Hire Identification containing details of the employee to be entered

Fig. 5 - The Person Information of the hire. Here, legislation-specific address validation takes place

Fig. 6 - The Legislative Information of the hire

Fig. 7 - Searching and selecting a Business Unit

 

Fig. 8 - Employment Information of the hire. This is integrated with various modules - Payroll, Compensations, Benefits

Fig. 9 - Manager and Payroll Details of the employee

Fig. 10 - Salary Information of the employee

Fig. 11 - Benefit Information of the employee

Fig. 12 - The roles of the employee being hired. ‘Employee’ role is set by default

Fig. 13 - Searching and adding a role to the employee

Fig. 14 - Reviewing the details of the employee

Fig. 15 - Submitting the employee hire

Fig. 16 - The confirmation message

After creation, to manage the Payroll Relationships associated with a hired employee, follow the steps below:

  1. Click on the Navigator icon and go to Payroll -> Payroll Calculations.

  2. Click on Manage Payroll Relationships under Person in the task list on the left.

  3. Enter the name of the employee and select the Legislative Data Group from the dropdown.

  4. Click on the Search button.

  5. Click on the name of the employee from the search results.

  6. Click on the assignment name of the employee (circled in the screenshot below).

  7. You can add or remove Payroll Details to or from the employee by using the respective icons.

  8. Click on the Save button.

 

Fig. 17 - The Payroll Calculations on the Navigator menu

Fig. 18 - The Manage Payroll Relationships task

Fig. 19 - Searching for the employee

Fig. 20 - The Manage Payroll Relationships page. The assignment name is circled

Fig. 21 - Payroll Details of the employee

Fig. 22 - Adding a Payroll Detail to the employee

An employee is thus created and assigned to a Payroll. The next steps would be the transactions from the Payroll’s side.

Oracle Fusion Payroll - Payroll Processes, Payroll Process Results, and Person Process Results (Part 2)



This tutorial deals with Fusion Payroll and how it functions. In this part, we will discuss the UIs related to Payroll Processes.

Payroll Processes

As discussed in the previous part of the tutorial, specific processes are available in each work area. Depending on a work area’s functionality, it has the payroll processes related to it. For example, the Payroll Calculation work area has the payroll processes that are responsible for calculating the payroll.

It can be accessed by Navigator -> Payroll -> Payroll Flows -> Submit a Process or Report.

Note: For the following steps to be executed, payment methods have to be set up. This has already been discussed in the earlier tutorials, ‘Oracle Fusion Payroll - Payment Methods (Part 1),’  ‘Oracle Fusion Payroll - Payment Methods (Part 2),’ and  ‘Oracle Fusion Payroll - Payment Methods (Part 3).’

The Calculate Payroll Process

The Calculate Payroll process is the first payroll process to be executed from the Payroll Calculation work area.

To run a Calculate Payroll process, follow the steps below:

  1. Click on the Navigator icon (shown in the screenshot below). Go to Payroll -> Payroll Calculations to go to the Payroll Calculations work area.

  2. Click on Submit a Process or Report under Payroll Flows from the task list on the left (circled in the screenshot below).

  3. Select the Legislative Data Group from the dropdown.

  4. Search for and select the Calculate Payroll process (circled in the screenshot below) and click on the Next button on the top right-hand corner.

  5. Enter a name for the payroll flow.

  6. Search and select the Payroll using the magnifying glass icon (circled in the screenshot below).

  7. Search and select the Payroll Period using the magnifying glass icon (circled in the screenshot below).

  8. Select other details if needed and click on the Next button.

  9. Enter the flow interaction details (if any) by using the Add icon (circled in the screenshot below) and click on the Next button.

  10. Select a schedule for the process and click on the Next button.

  11. Review the details of the process and click on the Submit button.

  12. Click on OK and View Checklist to view the checklist of that process.

  13. If you click on the Go to Task icon, you will be shown a list of Processes and Reports for that process.

 

Fig. 1 - The Payroll Calculations work area in the Navigator menu

 

Fig. 2 - The Submit a Process or Report task

 

Fig. 3 - The Calculate Payroll process

 

Fig. 4 - Entering the flow details

 

Fig. 5 - Searching and selecting the payroll

 

Fig. 6 - Searching and selecting the Payroll Period

 

Fig. 7 - The flow interaction (optional). The Add icon is circled

 

Fig. 8 - Selecting a schedule for the process

 

Fig. 9 - Reviewing the details of the process

Fig. 10 - Submitting the payroll flow

 

Fig. 11 - The Checklist of the Calculate Payroll process

 

Fig. 12 - Processes and Reports for the Calculate Payroll process

The Calculate Prepayments Process

Once the Calculate Payroll process has been executed, the Calculate Prepayments process has to be run.

To run a Calculate Prepayments process, follow the steps below:

  1. Click on the Navigator icon. Go to Payroll -> Payment Distribution to go to the Payment Distribution work area.

  2. Click on Submit a Process or Report under Payroll Flows from the task list on the left (circled in the screenshot below).

  3. Select the Legislative Data Group from the dropdown.

  4. Search for and select the Calculate Prepayments process and click on the Next button on the top right-hand corner.

  5. Enter the Payroll Flow name. Select the Payroll using the magnifying glass icon (circled in the screenshot below).

  6. Select the Process Start Date and Process End Date and click on the Next button.

  7. Enter flow interactions (if any) and click on the Next button.

  8. Select a schedule for the payroll flow and click on the Next button.

  9. Review the details of the payroll flow and click on the Submit button.

  10. Click on OK and View Checklist to view the checklist for the process.

  11. The Status column will show whether the process was successful.

  12. To view the errors, if any, click on the Errors and Warnings tab on the checklist page.

 

Fig. 13 - The Payment Distribution work area in the Navigator menu

 

Fig. 14 - The Submit a Process or Report task

 

Fig. 15 - The Calculate Prepayments process

 

Fig. 16 - Entering the flow details 

Fig. 17 - Reviewing the details

 

Fig. 18 - Submitting the payroll flow

 

Fig. 19 - Checklist showing error in status. To roll back the process, use the action shown

 

Fig. 20 - Message in case of error in calculating prepayments 

Oracle Fusion Payroll - Payroll Processes, Payroll Process Results, and Person Process Results (Part 3)


This tutorial deals with Fusion Payroll and how it functions. In this part, we will discuss the UIs related to Payroll Processes, Payroll Process Results, and Person Process Results.

Payroll Processes

As discussed in the previous part of the tutorial, specific processes are available in each work area. Depending on a work area’s functionality, it has the payroll processes related to it.

Note: For the following steps to be executed, payment methods have to be set up. This has already been discussed in the earlier tutorials, ‘Oracle Fusion Payroll - Payment Methods (Part 1),’  ‘Oracle Fusion Payroll - Payment Methods (Part 2),’ and  ‘Oracle Fusion Payroll - Payment Methods (Part 3).’

The Make EFT Payments Process

After the prepayments are calculated, the appropriate payment process has to be run, depending on the payment method set up for the employee. To generate checks, the Generate Check Payments process is used. For making EFT payments, the Make EFT Payments process is used.

To run a Make EFT Payments process, follow the steps below:

  1. Click on the Navigator icon. Go to Payroll -> Payment Distribution to go to the Payment Distribution work area.

  2. Click on Submit a Process or Report under Payroll Flows from the task list on the left (circled in the screenshot below).

  3. Select the Legislative Data Group from the dropdown.

  4. Search for and select the Make EFT Payments process and click on the Next button on the top right-hand corner.

  5. Enter the Payroll Flow name. Select the Payroll using the magnifying glass icon (circled in the screenshot below).

  6. Select the Process Start Date and Process End Date and click on the Next button.

  7. Search and select the Organisation Payment Method.

  8. Enter flow interactions (if any) and click on the Next button.

  9. Select a schedule for the payroll flow and click on the Next button.

  10. Review the details of the payroll flow and click on the Submit button.

Fig. 1 - The Make EFT Payments process

Fig. 2 - Entering the flow details

Fig. 3 - Searching and selecting the payment method

Fig. 4 - Reviewing the payroll flow details

Fig. 5 - Submitting the payroll flow

The Roll Back Process

The Roll Back process is an important process that is used when errors are present in a payroll execution. If you are not satisfied with a particular payroll process run, or if you have detected errors in a process that has been executed, the Roll Back process can be used to undo it.

To run a Roll Back process, follow the steps below:

  1. Click on the Navigator icon (shown in the screenshot below). Go to Payroll -> Payroll Calculations to go to the Payroll Calculations work area.

  2. Click on Submit a Process or Report under Payroll Flows from the task list on the left (circled in the screenshot below).

  3. Select the Legislative Data Group from the dropdown.

  4. Select Roll Back Process and click on the Next button on the top right-hand corner.

  5. Enter a name for the flow.

  6. Click on the magnifying glass icon (circled in the screenshot below) to search and select a payroll process. Click on the process name from the search results.

  7. Click on the Next button.

  8. Add a flow interaction (if any) by clicking on the Add icon (circled in the screenshot below).

  9. Click on the Next button.

  10. Select a schedule for the payroll process to run. If you want the process to run immediately, select As soon as possible. If you want to pick a specific schedule instead, use the Using a schedule option: you can select Once, a schedule formula, Weekly, Daily, or Monthly. For the weekly, daily, and monthly options, you have to select a start and end date.

  11. Click on the Next button.

  12. Review the details of the payroll process and click on the Submit button.

Fig. 6 - The Payroll Calculations work area in the Navigator menu

Fig. 7 - The Submit a Process or Report task

Fig. 8 - Selecting a process

Fig. 9 - Entering the flow details. Click on the circled icon to select a payroll process

Fig. 10 - Searching and selecting a payroll process

Fig. 11 - Adding flow interaction. The Add icon is circled

Fig. 12 - Selecting a schedule for the payroll process

Fig. 13 - Picking a schedule

Fig. 14 - Reviewing the details of the payroll process

Fig. 15 - Submitting the payroll flow

Payroll and Person Process Results

After submitting the process(es), to view the Person Process Results, follow the steps below:

  1. Go to the Payroll Calculation work area from the Navigator menu.

  2. Click on View Person Process Results under Process Results from the task list on the left. Search for your Payroll Process Flow Name.

  3. Click on a Payroll Process Flow Name from the search results to view its person processing status.

  4. Click on the bar graph to view the persons processed by the flow.

  5. By clicking the name of a person, you can view their Person Process Results that include the Statement of Earnings, Earnings, Deductions, etc. of that person.

  6. To add or remove any of the details, click on the Control Details button at the top right-hand corner of the Quick Reference Summary section. You can use the arrow buttons to add and remove any details that are available.

Fig. 16 - The Payroll Process Flow Patterns

Fig. 17 - The processing status of the payroll flow

Fig. 18 - Persons processed by the flow above

Fig. 19 - Person Process Results of the person ‘Kapoor, Arjun’

Fig. 20 - Available and selected Control Details

Fig. 21 - Earnings of the person

Fig. 22 - Run Results of the person

To view the Payroll Process Results, follow the steps below:

  1. Go to the Payment Distribution work area from the Navigator menu.

  2. Click on View Payroll Process Results under Process Results from the task list on the left.

  3. Enter your Payroll Name.

  4. Click on the Search button and click on a name from the search results.

Fig. 23 - The View Payroll Process Results task

Fig. 24 - Person Process Results of ‘Kapoor, Arjun’ for the Prepayments process

SOA 11g Installation


Objective: In this article, we will learn about SOA 11g installation.

Installing Oracle SOA Suite 11g requires:

  1. Installing Oracle Database 10g or 11g XE

  2. Creating schemas for Oracle SOA Suite in an Oracle Database by running the Repository Creation Utility (RCU).

  3. Installing Oracle WebLogic Server.

  4. Installing Oracle SOA Suite (Parts 1 and 2).

  5. Configuring a domain in Oracle WebLogic Server to support both Oracle SOA Suite and Oracle Enterprise Manager.

  6. Verify the Installation

So let's follow the above parts in order to start and complete the installation and configuration of Oracle SOA suite 11g.

1. Installing Oracle Database

For database installation steps, click here.

2. Installing the Repository Creation Utility

For RCU installation steps, click here.

3. Installing Oracle WebLogic Server

For WebLogic installation steps, click here.

4. Installing Oracle SOA Suite

Download the SOA Suite from the following page (all of the above downloads can be found there):

http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html

(Be sure you download both parts.)

After downloading the 2 parts of the SOA Suite installation (2 ZIP files):

1. Unpack Part 1 & Part 2 zip files (Disk1-5 folders)

2. Inside Disk1, create a file called install.bat that launches the installer. Typically this runs setup.exe with the -jreLoc option pointing at the jre directory of your JDK.


3. After creating install.bat, double-click on it to start the installation.


4. The Welcome screen appears; click Next.


5. Select Skip Software Updates and click Next.


6. Verify that all of the validations completed successfully and click Next.


7. Choose the MW home directory (I have installed in the D drive), specify the Oracle Home directory (you can leave the default, Oracle_SOA1), and click Next.


8. Leave the default selected WebLogic server and click Next.


9. Verify that all installation details are correct and click Install.


10. Follow the installation progress and verify that it completes successfully.


11. During the installation you will be prompted to point to the location of the Disk4 and Disk5 folders that were already unpacked prior to the installation. Browse to Disk4 and Disk5 and click Next.


12. When finished, click Next. Save the installation details (these include directories, URLs, port numbers, etc.) and click Finish.


5. Configuring a Domain in Oracle WebLogic Server for SOA Suite

In this part we will run the Oracle Fusion Middleware Configuration Wizard. The wizard will help us create a WebLogic domain that contains an Administration Server and Managed Server(s).

1. Start the wizard by running the file config.cmd (in my installation: D:\Oracle_11.1.1.6\Middleware\Oracle_SOA1\common\bin).

2. Choose ‘Create New WebLogic Domain’ and click Next.


3. Choose the required products that will be supported by the created domain. In this example I chose ‘Oracle SOA Suite’, ‘Oracle BPM Suite’, ‘Oracle BAM’, and ‘Oracle Enterprise Manager’. Note that ‘Oracle WSM Policy Manager’ and ‘Oracle JRF’ were marked automatically as well. Click Next.


4. In Domain Name and Location you can leave the default values (verify that the MW home is correct) or change them, and click Next.


5. Enter the administration user name and password details and click Next.


6. Choose ‘Sun SDK’ for better performance and click Next.


7. Enter the JDBC details (like those entered when configuring RCU) in the following order:

1) Check all Component Schema

2) Enter the schema password

3) Enter the Service name

4) Enter the Host Name

5) Enter Port number

For example, select the BAM schema and provide the details below:

Schema Owner: DEV_ORABAM

DBMS/Service: XE

Hostname: localhost

Port:1521

Schema password: the password entered while installing DB

When finished, click Next.
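Before relying on the wizard's connection test in the next step, you can optionally confirm that the RCU schemas exist by querying the database directly. This is only a sanity-check sketch: it assumes the DEV schema prefix used in the example above and a login with access to DBA_USERS.

-- List the RCU-created schemas (DEV prefix assumed)
SELECT username
FROM   dba_users
WHERE  username LIKE 'DEV\_%' ESCAPE '\'
ORDER  BY username;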


8. Verify that the JDBC test connection completed successfully and click Next.


9. In the configuration settings, select Administration Server, Managed Servers, Clusters and Machines, and Deployments and Services.

In this example I will configure the Admin Server, which hosts Oracle Enterprise Manager Fusion Middleware Control for performing administrative tasks; the Managed Servers (soa_server1 and bam_server1) are instances of Oracle WebLogic Server used to host deployed applications.

Click Next.


10. In the configuration of the Administration Server, specify the name as Admin server and port number 9001 (I have used 9001 because other ports are already in use), and click Next.


11. In the configuration of Managed Servers, change the listen port for soa_server1 to 9002 and the bam_server1 port to 9003, and click Next.


12. Assign the Admin Server to the local machine and click Next.


13. Check the target deployments and services and click Next.


14. In the configuration summary, check the details and click Create.


15. Make sure the installation finishes successfully, then click Done.


6. Verify the Installation

You can verify the installation by starting the Admin and Managed servers.

1. Open a command window, go to D:\Oracle_11.1.1.6\Middleware\user_projects\domains\soa_domain\bin, and run the command: startWebLogic.cmd

2. Make sure the server starts in RUNNING mode.

3. Open another command window, go to D:\Oracle_11.1.1.6\Middleware\user_projects\domains\soa_domain\bin, and run the command: startManagedWebLogic.cmd soa_server1

In the command window you will be prompted for a user name and password. Enter the user name and password you provided while configuring the domain.

Make sure soa_server1 also starts in RUNNING mode.

4. Repeat the above steps to start bam_server1.

5. Once all the servers are in the running state, access the consoles:

1. WebLogic: http://localhost:9001/console

2. Enterprise Manager: http://localhost:9001/em (Enterprise Manager is deployed on the Admin Server)

3. BAM: http://localhost:9003/OracleBAM

OBIEE - How to Use Variables in Oracle BI Repository


About

This chapter illustrates how to use variables in the Oracle BI Repository to streamline administrative tasks and dynamically modify metadata content in order to adjust to a changing data environment.

1 - Session Variables

Session variables are similar to dynamic repository variables in that their values are obtained from initialization blocks. Unlike dynamic repository variables, however, the initialization of session variables is not scheduled. When a user begins a session, Oracle BI Server creates new instances of the session variables and initializes them.

Unlike a repository variable, there are as many instances of a session variable as there are active sessions on Oracle BI Server. Each instance of a session variable can be initialized to a different value. A session is an instance of a user running the client application: the session starts when the user opens the application and ends when the user closes it.

Session variables exist in two forms:

  • System

  • Non-system

Figure 1

1.1- System Session Variables

System Session Variables are session variables that Oracle BI Server and Oracle BI Presentation services use for specific purposes.

System Session Variables have reserved names, which cannot be used for other kinds of variables. An example is USER, which holds the value that the user entered as a logon name. The values are used to refresh the GROUP, DISPLAYNAME, USER, and LOGLEVEL variables.

Figure 2
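As an illustration of how these variables get their values, an authentication-style initialization block might use SQL along the following lines. The table and column names here are hypothetical, while ':USER' and ':PASSWORD' stand for the values entered at login, substituted by Oracle BI Server at run time:

-- Populates USER and DISPLAYNAME (in that order) in the data target
SELECT logon_name,
       first_name || ' ' || last_name
FROM   bi_users
WHERE  logon_name = ':USER'
  AND  password   = ':PASSWORD'

The order of the selected columns must match the order of the variables defined in the initialization block's data target.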

1.2- Non-System Session Variables

For any non-system session variable you create, you must also create an initialization block. Unlike system session variables, which have reserved names and are used for specific purposes, non-system session variables can be created by the administrator to serve a purpose for a specific application. A common use for non-system session variables is setting user filters.

For example, you can define a non-system variable called Region that is initialized to the name of the user’s sales region. You can then set a security filter for all members of a group that enables them to see only the data pertinent to their region.

In the example below, an initialization block named Region populates the variable Region with the value East.
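A minimal sketch of the SQL behind such an initialization block, assuming a hypothetical SALES_REPS table keyed by login name:

-- The single selected column populates the Region session variable
SELECT region
FROM   sales_reps
WHERE  login_name = ':USER'

A security filter such as Region = VALUEOF(NQ_SESSION.Region) can then restrict each user to the data for their own region.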

Figure 3

1.3- Initialization Blocks

Initialization blocks are used to initialize system and non-system session variables, as well as dynamic repository variables.

An initialization block specifies SQL that is run against a data source to populate one or more variables.

The blocks are invoked during Oracle BI Server start-up and are periodically rerun to refresh the values for dynamic variables according to an established schedule.

Figure 4

Example

This initialization block determines the latest dates contained in the source data and stores them in variables:

Figure 5

The name of this initialization block is CurrentPeriods.

The initialization block is scheduled to refresh every hour.

The data source is the data source identified in the SUPPLIER CP connection pool.

The SQL queries the D1_CALENDAR2 table for the latest Day, Month, and Year data based on the most recent period key in the D1_ORDERS2 table, and then populates the CurrentDay, CurrentMonth, and CurrentYear variables.
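Reconstructing that description as SQL gives a sketch like the one below. Only the two table names come from the example; the column names are assumed:

-- Latest Day, Month, and Year, based on the most recent period key in
-- D1_ORDERS2; the three columns populate CurrentDay, CurrentMonth, and
-- CurrentYear, in that order.
SELECT calendar_day, calendar_month, calendar_year
FROM   d1_calendar2
WHERE  perkey = (SELECT MAX(perkey) FROM d1_orders2)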

1.4- Edit Data Source

To edit the data source, perform the steps below:

  1. Enter a default query or enter the query using database-specific SQL.

  2. Click the Test button to test the query.

  3. Click the Browse button to select the connection pool.

Figure 6

The SQL must refer to physical tables that can be accessed using the connection pool specified in the Connection Pool field. If you want the query for an initialization block to use database-specific SQL, you can select a database type for that query. If a SQL initialization string for that database type has been defined when the initialization block is instantiated, that string is used. Otherwise, the default initialization SQL string is used.

When you create SQL and submit it directly to the database, bypassing Oracle BI Server (for example, when creating initialization blocks), you should test the SQL using the Test button.  If the SQL contains an error, the database returns an error message.

1.5- Edit Data Target

  • Create new variables.

  • Use the Up and Down buttons to rearrange variable order. The order of the variables must match the order of the corresponding columns in the initialization block SQL query.

  • Remove and Edit the variables.

Figure 7

Example:

It is recommended that you create a dedicated connection pool for initialization blocks, and that this connection pool not be used for queries. Additionally, it is recommended that you isolate the connection pools for different types of initialization blocks; this makes sure that authentication and login-specific initialization blocks do not slow down the login process. The following types should have separate connection pools:

All authentication and login-specific initialization blocks such as language, externalized strings, and group assignments.

All initialization blocks that set session variables and repository variables.

These initialization blocks should always be run using credentials with administrator privileges.

Figure 8

Figure 9

2- Setting an implicit fact column

2.1- Business Challenge: Dimension only queries

In this context, Dimension only queries refer to queries that contain columns from more than one dimension with no fact columns included.  A dimension-only query with columns from the same dimension does not create a problem.

There may be occasions when users want to build queries with only dimension data. For example, a user might want to see all products purchased by a customer. However, dimension-only queries may not return the desired results. This is because in a business model with conforming dimensions, many fact tables may join to the same dimensions. For example, a sales fact and a service fact may both join to the product dimension.

When a user runs a dimension-only query, Oracle BI Server picks the most economical fact source based on the number of joined dimensions. This may not return the desired results.

2.2- Business Solution: Implicit Fact

Implicit fact is a column that is added automatically to dimension-only queries.

Note:

The column is included in the query but not shown in the results.

It provides the ability to set a fact table source for a subject area and expected results for dimension-only queries.

It forces Oracle BI Server to select a predominant fact table source even if it is not the most economical source.

It specifies a default join path between dimension tables when there are several possible alternatives.

Example:

Figure 10

2.3- To Set an implicit fact

Figure 11

Verify the Result

Run a dimension-only analysis and verify that the correct results are obtained.

Figure 12

Check the query log file and verify that the implicit fact column and corresponding fact table are accessed.

Figure 13

 

OBIEE - Modelling Time Series & Many-to-many Relationships


Introduction

Dimensional schemas work well for modelling a particular part of a business where there are one-to-many relationships between the dimension table and fact tables. However, sometimes it is necessary to model many-to-many relationships between the fact and dimension tables.

This article covers two topics:

Modelling Time Series data

Modelling many-to-many relationships

  1. Modelling Time Series Data

1.1- Time Comparisons

The time dimension is a little different from other dimensions, because its members follow a strict chronological order rather than an ordinary levelled hierarchy. It is also critical in SQL development when you want to see the right set of data at the right time. A typical time comparison is sales to date across different time periods, and OBIEE makes analysis of sales by month, quarter, and year simple. The ability to compare business performance with previous time periods is essential for understanding the business: time comparison enables businesses to analyse data that spans multiple time periods, providing context for the data.

Example:

Time comparison made for different time periods is shown below:

Figure 1

1.2- Business Challenge

SQL was not designed to make direct comparisons over time. For example, to compare this year’s sales to last year’s sales, you must run three separate queries, as given below (a plain-SQL sketch follows the list):

What were this year’s sales?

What were the previous year’s sales?

How to perform the comparison between last year and current year sales?
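For illustration, here is roughly what such a comparison looks like in plain SQL. The schema is hypothetical (a SALES table with SALE_DATE and DOLLARS columns), and the two years are hard-coded:

-- This year's total, last year's total, and the difference
SELECT cur.total              AS this_year_sales,
       prev.total             AS last_year_sales,
       cur.total - prev.total AS sales_change
FROM  (SELECT SUM(dollars) AS total
       FROM   sales
       WHERE  EXTRACT(YEAR FROM sale_date) = 2008) cur,
      (SELECT SUM(dollars) AS total
       FROM   sales
       WHERE  EXTRACT(YEAR FROM sale_date) = 2007) prev;

Every new comparison (quarter over quarter, month to date, and so on) needs another hand-written variant of this query, which is exactly what the repository-based approach described below avoids.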

 

Figure 2:

 

1.3- Business Solution

OBIEE provides a single representation for all these queries. The solution is to model time series data in the Oracle BI Repository. This enables users to make one request for the desired result; Oracle BI Server runs multiple queries in parallel to get it. The queries that run in the background to support the time measure are transparent to the user.

Figure 3



1.4- Time Dimensions

SQL does not provide a direct way to make time comparisons. So you must model time series data in the Oracle BI Repository.

The time dimension is set up based on the period table in the data warehouse. Measures then take advantage of the time dimension by using the AGO, TODATE, and PERIODROLLING functions.

Compared to modelling an ordinary dimension, the time dimension requires two steps:

  • Select the time option in the Logical Dimension dialog box.

  • Designate a chronological key for every level of the dimension hierarchy.

At query time, the Oracle BI Server then generates the highly optimized SQL that pushes the time offset processing down to the database wherever possible, resulting in best performance and functionality.

Figure 4

1.5- Time Series Functions

Oracle BI Server provides AGO, TODATE, and PERIODROLLING time series functions for time series comparisons.

The AGO function calculates an aggregated value as of some time period shifted back from the current time. For example, it can be used to create a measure such as quarter-ago sales.

The TODATE function aggregates a measure attribute from the beginning of a specified time period to the currently displayed time.

The PERIODROLLING function performs an aggregation across a specified set of query grain periods, rather than within a fixed time series grain. The most common use is to create rolling averages, such as a 13-week rolling average for sales.

Query Grain

It is the lowest time grain of the request. For example, in the report below, the query grain is Month.

Time Series Grain

It is the grain at which the aggregation or offset is requested, for both AGO and TODATE functions. In the example shown below, the time series grain is Quarter. Time series functions are valid only if the time series grain is at the query grain or longer.

Note:

The PERIODROLLING function does not have a time series grain. Instead, you specify a start and end period in the function.

1.6- Storage Grains

The example report shown below can be computed from daily sales or monthly sales; the grain of the sources is called the storage grain. In the time dimension, the chronological key is set to order data in the desired chronology. A chronological key must be defined at this level for the query to work, but performance is generally better if a chronological key is also defined at the query grain.

In the example report shown below,

Figure 5

Dollars Qago is an example of the AGO function. It compares the dollars to dollars a quarter ago.

Dollars QTD is an example of the TODATE function. It accumulates dollars from the beginning of each quarter to the end of each quarter.

Dollars 3-Period Rolling Sum and Dollars 3-Period Rolling Avg are examples of the PERIODROLLING function. For instance, for Dollars 3-Period Rolling Sum, the three-month rolling sum starts two periods in the past and includes the current period. That is, for the month 2008/07, the rolling sum includes 2008/05, 2008/06, and 2008/07.

Example:

Figure 6

1.7- Create Measures and Add Them to the Presentation Layer

Use the Expression Builder to build a measure by using AGO function with the following form:

AGO (<<Measure>>,<<Level>>,<<Number of Periods>>)

Figure 7

Use the Expression Builder to build a measure by using the TODATE function with the following form:

TODATE (<<Measure>>,<<Level>>)

Figure 8

1.8- Test Results

Use the Expression Builder to build a measure by using the PERIODROLLING function with the following form:

PeriodRolling(<<Measure>>, <<integer>>, <<integer>>)

Figure 9
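For concreteness, here are the three forms with the placeholders filled in. The measure and level names below are illustrative only, not taken from a real repository:

AGO("Sales"."Dollars", "Time"."Quarter", 1) - Dollars one quarter ago

TODATE("Sales"."Dollars", "Time"."Quarter") - quarter-to-date Dollars

PERIODROLLING("Sales"."Dollars", -2, 0) - a three-period rolling sum, from two periods back through the current period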

Create analysis and verify the results.

Figure 10

  2. Modelling Many-to-Many Relationships

In a dimensional model, you have one-to-many relationships. To model a many-to-many relationship, it is broken down into one-to-many relationships by using an additional table called a bridge table.

Dimensional star schemas are ideal for modelling a business when one-to-many relationships exist between the dimension tables and fact tables.

Challenge: It is often necessary to model many-to-many relationships between dimension tables and fact tables.

Solution: Use a bridge table to model many-to-many relationships.

2.1- Bridge Table

  • It resolves many-to-many relationships between dimension tables and fact tables.

  • It stores multiple records corresponding to a dimension.

  • It contains a weight factor column representing the ratio of the many-to-many relationship.

    • For example, if two sales representatives are associated with a given sales commission, the weight factor for each representative would be 0.50.

    • The weight factor is multiplied by the commission amount to yield each representative’s share of the commission.

    • More complex factors can be used (for example, 0.50, 0.25, 0.25) as long as the sum of all factors is 1.

When you need to model many-to-many relationships between dimension tables and fact tables, you can create a bridge table that resides between the fact and dimension tables. A bridge table stores multiple records corresponding to that dimension.

Example 1:

  • Each sales representative may participate in many deals that pay commission.

  • Each deal may include many sales representatives who split the commission.

  • A bridge table is required to model many-to-many relationship between the commission fact table and sales representative dimension table.

Figure 11

In the example above, you model a bridge table to resolve many-to-many relationship between a commission fact table and a sales representative dimension table.

Example 2:

Use known techniques to import the commission fact tables and commission bridge tables to the physical layer.

Figure 12

Use the bridge table to model the many-to-many relationship between the commission fact and the sales representative dimension in the physical layer.

Figure 13

Use physical columns to create a measure that calculates “commission amount” * “weight factor”.

Figure 14
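As a sketch, the logical expression behind such a measure simply multiplies the two columns (the table and column names below are assumed):

"Commission Fact"."COMMISSION_AMOUNT" * "Commission Bridge"."WEIGHT_FACTOR"

For example, a 5,000 commission split between two representatives with weight factors of 0.50 each yields 5,000 * 0.50 = 2,500 for each representative.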

Use an analysis and query log to verify the results.

Figure 15

 

OBIEE - Role-based Access Control Model in Oracle BI


About

Oracle BI uses a role-based access control model. Security is defined in terms of the Application roles that are mapped to directory server groups and users.

  1. Clear Implicit Fact

An implicit fact is required only if you have more than one fact table. Oracle provides an option to clear the implicit fact column if a single fact table alone is needed.

To remove the implicit fact column, click the Clear button in the Subject Area properties box.

Figure 1

  2. Security

An authentication layer is built into Oracle BI to check that the correct user logs in to the system. Setting up security involves the following steps:

  • Identify and describe security settings for Oracle BI Server.

  • Create Users and Groups.

  • Create Application roles.

  • Set up permissions for repository objects.

  • Use query limits, timing restrictions, and filters to control access to repository information.

2.1- Business Challenge

Who will have access to company data and business resources?

Under what conditions will access be limited or denied?

How will access be enforced?

How will users authenticate themselves?

Where will credentials be stored?

2.2- Business Solution

The solution for securing Oracle BI Server can be divided into two broad categories: controlling access to the components within the BI domain (resource access security) and controlling access to business source data (data access security).

Controlling access to system resources is achieved by the following steps:

  • Require users to be authenticated during the login process.

  • Restrict users to only those resources for which they are authorized.

  • Manage user identities, credentials, and permission grants.

This allows you to control system access by validating users at login (authentication) and to control access to specific Oracle BI components and features according to a user’s permission grants (authorization).

  3. Managing Oracle BI Security

Oracle BI integrates with Oracle Fusion Middleware’s security platform:

  • Oracle WebLogic Server Administration Console manages users and groups for the embedded LDAP server that serves as the default identity store.

  • Oracle Fusion Middleware Control manages policy store application roles that grant permissions to users, groups, and other application roles.

  • Oracle BI Administration Tool manages permissions for Presentation layer objects and business model objects in the repository.

  4. Default Security Model

During installation, three Oracle BI security controls are preconfigured with initial or default values to form the default security model:

Identity Store contains the definitions of users, groups, and group hierarchies required to control authentication.

Policy Store contains the definition of application roles, the permissions granted to the roles, and the members (users, groups, and application roles) of the roles. It is designed to hold the application-role and permission-grant mappings to users and groups that are required to control authorization.

Credential Store stores the security-related credentials, such as user name and password combinations, for accessing an external system (such as database or LDAP server).

  5. Default Security Realm

The Administration Console can be accessed with the following URL: http://<machine name>:7001/Console. The Oracle WebLogic Server Administration Console is shown in the figure below:

Figure 2

On the left side of the console, under Domain Structure, note that there is a single WebLogic domain named bifoundation_domain into which the BI applications are deployed.

The OBI installer installs a single domain with a single security realm, named myrealm, in it. A security realm is a container for the mechanisms that are used to protect WebLogic resources, including users, groups, security policies, and security providers. While multiple security realms can be defined for the BI domain, only one can be active.

Click myrealm to view its settings.

  6. Default Authentication Provider

An authentication provider establishes the identity of users and system processes, transmits identity information, and serves as a repository from which components can retrieve identity information.

When a user logs in to a system with a username and password combination, Oracle WebLogic Server validates identity based on the combination provided.

Alternative security providers can be used if desired and managed in the Oracle WebLogic Administration console, but the WebLogic Authentication provider is used by default.

Note: There is a default WebLogic Identity Assertion Provider, which is used primarily for Single Sign-On.

Figure 3

  7. Default Users

The default identity store contains user names that are specific to Oracle BI. These default user names are provided as a convenience so you can begin using the Oracle BI Software immediately after installation, but you are not required to maintain the default names. In the example shown below, the users are BISystemUser and weblogic.

Figure 4

weblogic is the administrative user. After installation, a single administrative user is shared by Oracle BI and Oracle WebLogic Server. The same username and password that were supplied during the installation process are used for both. The username that is created during installation can be any desired name and need not be Administrator.

The password is also provided during installation and can be changed afterwards by using the administrative interface for the identity store. In the default security configuration, an administrative user is a member of the BIAdministrators group and has all rights granted to the Oracle BI Administrator user in earlier releases, with the exception of impersonation. The administrative user cannot impersonate other users.

Oracle BI System components now establish a connection to each other as BISystemUser instead of as the Administrator. Using a trusted system account such as BISystemUser to secure communication between components enables you to change the password of your deployment’s system administrator account without affecting communication between components.

  8. Default Groups

Figure 5

Groups are logically ordered sets of users. Creating groups of users who have similar needs for access to system resources enables easier security management. Managing a group is more efficient than managing a larger number of users individually. Oracle recommends that you organize users into groups for easier maintenance. Groups are then mapped to application roles in order to grant rights. Three default group names are provided as a convenience so you can begin using the Oracle BI Software immediately after installation, but you are not required to maintain the default names.

BIAdministrators group: Members have the equivalent permissions of the Administrator user of earlier releases, with the exception of the ability to impersonate. The Administrator user of earlier releases could impersonate other users, but members of the BIAdministrators group cannot.

BIAuthors group: Members have the permissions necessary to read/create content for other users to use.

BIConsumers group: Members have the permission to use content created by other users. The BIConsumers group represents all users who have been authenticated by Oracle BI. By default, every authenticated Oracle BI user is part of the BIConsumers group and does not need to be explicitly added to it. The BIConsumers group includes the Oracle WebLogic Server users group as a member.

OBIEE- Object Level Security in Oracle BI


About

This chapter discusses object-level security, which is set at the web catalog level on folders, dashboards, dashboard pages, and reports.

  1. Verify Security Settings

To make policy store changes visible throughout Oracle BI, you must restart Oracle BI Server.

In this example, JCRUZ has logged into Oracle BI and selected My Account. On the Roles and Catalog Groups tab, he sees all the roles to which he is assigned.

Jose Cruz is a member of the Sales Managers group and the Sales Supervisors group. Because both of these roles are members of the Sales Associates Role application role, he is also a member of that role. By default, all BI users are also members of the default application roles, Authenticated User and BI Consumer Role.

The value of using application roles comes from the fact that you can move the system you have built between environments without having to rewire all of the security. For example, you would not have to change the security settings in your presentation catalog or repository. You can just remap your application roles to the target environment.

After you restart Oracle BI Server, changes in security settings are visible in the Identity manager in the repository.

  2. Set Up Object Permissions

Object permissions are set by using the Administration Tool. There are two approaches for setting object permissions:

  • You can set the permissions for particular users or application roles in the Identity Manager if you want to define permissions for a larger set of objects at one time.

  • Permissions can be set for individual objects in the presentation layer.

In this example, permissions are set for the Customer presentation table object. Access to this object is restricted for the AuthenticatedUser, BIConsumer, and SalesAssociateRole application roles. The user AZIFF is a member of these application roles. Therefore, AZIFF does not have access to the Customer presentation table when he logs in to Oracle BI and selects the SupplierSales subject area.

  3. Permission Inheritance

Users can have explicitly granted permissions. They can also have permissions granted through membership in application roles, which in turn can have permissions granted through membership in other application roles, and so on.

  • Permissions granted explicitly to a user take precedence over privileges granted through application roles.

  • Permissions granted explicitly to an application role take precedence over any privileges granted through other application roles.

  • If security attributes conflict at the same level, a user or application role is granted the least-restrictive security attribute.

Example:

  • User1 is a direct member of Role1 and Role2, and is an indirect member of Role3, Role4 and Role5.

  • The resultant permissions from Role1 are NO ACCESS for TableA, READ for TableB, and READ for TableC.

  • The total permissions granted to User1 are READ access for TableA, TableB, and TableC (the READ on TableA granted through the other roles overrides the NO ACCESS from Role1, because the least-restrictive attribute wins).

  4. Set Row-Level Security (Data Filters)

Data filters are a security feature that provides a way to enforce row-level security rules in the repository. Data filters can be set on objects in both the BMM layer and the Presentation layer. Applying a filter on a logical object affects all Presentation layer objects that use that object.

In this example, you set a filter on the Customer presentation table for the SalesSupervisorsRole application role so that customer data is visible only for those records in which Jose Cruz or his direct reports are the sales representatives. After setting this filter, if Jose Cruz creates and runs an analysis that includes the SalesRep column, only his records and those of his direct reports are visible.
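
As a rough illustration of the filter expression (the presentation object name and the session variable usage here are assumptions for the sketch, not the exact repository objects from the figures):

-- Illustrative data filter predicate on the Customer table for the
-- SalesSupervisorsRole; NQ_SESSION."USER" is the standard OBIEE session
-- variable holding the login name.
"SupplierSales"."Customer"."Sales Rep" = VALUEOF(NQ_SESSION."USER")

In practice, a filter covering a manager and his direct reports would typically compare against a session variable populated by an initialization block rather than the login name alone.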

  5. Set the Query Limits

Oracle BI Server prevents queries from consuming too many resources by limiting how long a query can run and how many rows a query can retrieve.

To access the Query Limits tab, open the Identity Manager, click the Application Roles tab, double-click an application role to open the Application Role dialog box, and click Permissions.

Use the Query Limits tab to:

  • Limit queries by maximum run time or to time periods for a user or role.

  • Control the number of rows accessed by a user or role.

  • Control the maximum query run time.

  • Enable or disable Populate Privilege.

  • Enable or disable Execute Direct Database Requests.

Note:

It is a recommended practice to set query limits for application roles rather than for individual users.

  6. Set Timing Restrictions

You can regulate when users can query databases to prevent users from querying when system resources are tied up with batch reporting, table updates, or other production tasks.

To restrict access to a database during particular time periods, click the ellipsis (...) button in the Restrict column to open the Restrictions dialog box. Then perform the following steps:

  1. To select a time period, click the start time and drag it to the end time.

  2. To explicitly disallow access, click Disallow.

  7. Cache Management

  7.1. Business Challenge

Decision support systems require a large amount of database processing.

Frequent trips to back-end databases to satisfy query requests can result in increased query response time and poor performance.

 

 

OBIEE - Application Roles & Policies in Oracle Enterprise Manager – Fusion Middleware Control


Introduction

This chapter describes the application roles and application policies that are managed in Oracle Enterprise Manager – Fusion Middleware Control. Application roles are new with OBIEE 11g and replace groups within OBIEE 10g.

  1. Default Application Roles

An application role defines a set of permissions that are granted to a user or group. Application roles are defined in Fusion Middleware Control, which can be accessed via http://<machinename>:7001/em. To access the Application Roles page, right-click coreapplication in the left pane and select Security -> Application Roles.

Default application roles include:

BISystem: Grants the permission necessary to impersonate other users. This role is required by Oracle BI System components for inter-component communication.

BIAdministrator: Grants the administrative permissions necessary to configure and manage the Oracle BI installation. Any member of the BIAdministrators group is explicitly granted this role and implicitly granted the BIAuthor and BIConsumer roles.

BIAuthor: Grants the permission necessary to create and edit content for other users to use. Any member of the BIAuthors group is explicitly granted this role and implicitly granted the BIConsumer role.

BIConsumer: Grants the permission necessary to use the content created by other users.

Figure 1

  2. Default Application Policies

Application policies are the authorization policies that an application relies upon for controlling access to its resources. Application policies are defined in Fusion Middleware Control. To access the Application Policies page, right-click coreapplication in the left pane and select Security -> Application Policies.

The default file-based policy store contains the Oracle BI permissions. An example of a permission is oracle.bi.server.manageRepositories, which grants permission to open repositories in online mode in the Oracle BI Administration Tool. This permission is granted to the BIAdministrator role.

Figure 2

Note:

These policy permissions are not the same as those used to define access to BI objects (metadata, dashboards, reports and so on). Policy store permissions are used only to define the BI functionality that assigned roles can access.

  3. Default Security Settings in RPD

Figure 3

  • Open the repository in online mode to see the default security settings. Repository security should be managed in online mode. Select Manage -> Identity to open the Identity Manager.

  • On the Users tab you can see the same set of users as those listed in the WebLogic Server Administration Console.

  • The Application Roles tab shows all application roles in the policy store.

  • The repository holds a cache of the identities, so users and application roles are visible in offline mode as well as online mode.

  4. Application Role Hierarchy

Figure 4

The above example illustrates the relationships among users, groups, application roles, and permissions.

The diagram in the example shows these relationships among the default application roles and the ways in which permissions are granted to users.

The table shows the role and permissions granted to all group members (users). In this example only one of the permissions granted by each role is shown.

  5. Create Groups

You use the WebLogic Server Administration Console to create groups. Groups are logically ordered sets of users. Managing a group is more efficient than managing a large number of users individually.

  • The default identity store provided for managing users and groups is Oracle WebLogic Server’s embedded directory server.

  • In this example, three new groups are added: SalesAssociatesGroup, SalesManagersGroup and SalesSupervisorsGroup.

  • When you click the New button, a dialog box opens for creating a new group.

Figure 5

  6. Create Group Hierarchies

  • The security realm in the WebLogic Administration Console is used to create group hierarchies.

  • On the Users and Groups tab in the security realm, click a group on the Groups subtab to view settings for the group.

  • On the Membership subtab, you can assign groups to other groups.

Figure 6

The example shows the group membership settings for the SalesSupervisorsGroup group. The SalesSupervisorsGroup group is a member of the SalesAssociatesGroup group. This means that any privileges assigned to the SalesAssociatesGroup group are inherited by the SalesSupervisorsGroup group.

  7. Create Users

Use the WebLogic Server Administration Console to create users. The default identity store provided for managing users is Oracle WebLogic Server’s embedded directory server. In the example below, two users, AZIFF and JCRUZ, are added.

When you click the New button, a dialog box opens for creating a new user. In the dialog box, you provide the user name, description, and password.

Figure 7

  8. Assign Users to Groups

  • On the “Users and Groups” tab in the Security realm, click a user on the Users subtab to view settings for the user.

  • On the Groups subtab, you can assign users to groups.

Figure 8

This example shows the group settings for the user JCRUZ. JCRUZ is a member of the SalesManagersGroup and SalesSupervisorsGroup groups.

  9. Create Application Roles

Oracle recommends that you map groups and other application roles to application roles and not to individual users. Once mapped, all members of the groups and roles are granted the same rights. Controlling membership in a group reduces the complexity of tracking access rights for multiple individual users.

Figure 9

  10. Map Application Roles

Once an application role is created, you can map it to users or groups defined in the LDAP server, or you can map it to other application roles.

In the example shown below, the SalesAssociateRole is mapped to the Sales Associates group, the SalesManager application role, and the SalesSupervisors application role.

Figure 10

Note:

It is possible to add individual users to a role, but the best practice is to add groups or application roles, not individual users, to application roles.

Figure 11

Figure 12

Why To Use HCM Data Loader


HCM Data Loader Compared to File-Based Loader

HCM Data Loader, also referred to as HDL, is the next-generation tool from Oracle for loading legacy HCM data into Fusion Applications.

Starting with the Release 9 July monthly update (Monthly Update Bundle 9.7), Oracle strongly recommends that all new customers begin using HDL.

Customers currently provisioned on Release 9 will require a configuration change.

All environments provisioned in Release 10 will be defaulted to HDL.

Existing Customers may continue using File Based Loader (FBL) but should begin evaluating HDL to plan a migration in the future, where applicable.

There are a few scenarios where HDL may not be recommended, and an exception may be considered, for both existing and new customers.

Situations where HCM Data Loader may not be recommended

  1. An existing customer using File-Based Loader who purchases an additional test environment that is created on R10. The customer must log an SR to change the default setting of Full to Limited to match other environments.

  2. Customers with PeopleSoft integration.

  3. Customers with Taleo integration via Taleo Connect Client (TCC) and File-Based Loader.

Migrating from File-Based Loader to HCM Data Loader

  1. Is File-Based Loader used for migration only? If so, once migration is complete, then HCM Data Loader could be considered.

  2. Is File-Based Loader used for ongoing integration? If so, then there will need to be rework of processes and a cutover decision.

  3. How are File-Based Loader data files generated? Whatever method is used for generating the File-Based Loader data files will need to be reworked to generate the correct HCM Data Loader format.

  4. The complexity of the integration will need to be taken into account to determine who does the rework of the extract mechanism.

  5. Are you loading objects outside of File-Based Loader and HCM Spreadsheet Data Loader (via SR requested scripts)?  If this is causing delays and issues related to lack of automation, then HCM Data Loader should be considered.

  6. Are there users who load data using HCM Spreadsheet Data Loader? A move to HCM Data Loader in R10 would disable this functionality, so it would probably be worth waiting for spreadsheet support.

  7. HCM Data Loader migration should be treated as an implementation with a proper project plan. File-Based Loader GUID values can continue to be used with HCM Data Loader. A process can be run to convert the File-Based Loader GUIDs into source keys that HCM Data Loader can recognize.

  8. HR spreadsheet loaders in the Data Exchange work area will not be available to use in conjunction with HCM Data Loader.

  9. HCM Data Loader and File-Based Loader cannot be used at the same time for objects supported by both.

  10. Payroll Batch Loader is still required for some payroll object loads.

  11. Environment refresh will overwrite HCM Data Loader settings if the source environment uses File-Based Loader. You will have to follow the process again to enable HCM Data Loader and convert File-Based Loader GUIDs to source keys.

  12. Once HCM Data Loader is enabled in a test environment, no additional File-Based Loader load testing will be possible.

New Implementation Considerations

  1. Customers who have recently started implementing and have not yet gone live should consider switching to HCM Data Loader if their timelines can accommodate it. This will mitigate the need for conversion to HCM Data Loader later in the project lifecycle.

  2. Project plans should be reviewed to incorporate the migration to HCM Data Loader, taking into account:

  • Training on the new tool

  • Rework of the extract mechanism to get data into the HCM Data Loader format

  • The need to test the migration and integration processes using HCM Data Loader instead of File-Based Loader

  • The need to fit in with major implementation milestones

Considerations for existing customers

  1. Existing live customers already using File-Based Loader and HCM Spreadsheet Data Loader should defer the switch to HCM Data Loader.

  2. Customers who are not yet live should evaluate whether to rework their implementation to use HCM Data Loader or continue using File-Based Loader and HCM Spreadsheet Data Loader.

  3. The main work involved in using File-Based Loader and HCM Data Loader is the extract of the data from a source system to the correct format ready for loading. Since this is not part of Oracle Fusion, Oracle does not provide a conversion process from File-Based Loader to HCM Data Loader.

  4. Oracle does provide the migration of File-Based Loader GUID values to the HCM Data Loader equivalent, which are referred to as source keys.

  5. Customers using Oracle Fusion Taleo Recruitment Out of the Box (OOTB) V1 Integration are not impacted.

  6. If you are using Taleo Connect Client and File-Based Loader, or a hybrid with OOTB, to integrate with Fusion, you will need to perform an evaluation and follow the steps to migrate to HCM Data Loader.

HCM Data Loader Compatibility With File Based Loader

HCM Data Loader and File-Based Loader cannot be used at the same time for objects supported by both. Either of them should be picked for conversion.

The setting of the HCM Data Loader Scope parameter on the Configure HCM Data Loader page determines whether HCM Data Loader or File-Based Loader is used and controls the behavior of the loading tools. The default value of this parameter is Limited for existing customers. If you attempt to load data for a business object not supported in the Limited mode, your whole data set will fail.

Limited mode: Only business objects not supported by File-Based Loader can be loaded using HCM Data Loader. All objects that File-Based Loader supports must continue to be loaded with File-Based Loader.

Full mode: HCM Data Loader is used for bulk-loading data into all supported business objects. File-Based Loader and HCM Spreadsheet Data Loader are disabled.

Important Note: You can switch from Limited mode to Full mode, but you cannot switch from Full mode to Limited mode. This is a one-time switch from File-Based Loader to HCM Data Loader.

Once you migrate to HCM Data Loader, HCM Spreadsheet Data Loader is also disabled because it relies on the File-Based Loader engine to load data to Oracle HCM Cloud. This restriction applies only to the spreadsheet loading that is launched from the Data Exchange work area. Other spreadsheet data loaders are not impacted by the uptake of HCM Data Loader.

Impact of upgrade to Release 10

HCM Data Loader will be Generally Available in R10 (and also in Release 9 Patch Bundle 7 and above), but there is no immediate requirement to migrate to HCM Data Loader.

HCM Data Loader and File-Based Loader cannot be used at the same time for objects supported by both.

On upgrade to Release 10 you will see the HCM Data Loader options available in the application but you should not use HCM Data Loader if you are an existing File-Based Loader customer until you have completed an evaluation of HCM Data Loader.

Important Note:

There are differences in file format and key structures.

Once the switch to HCM Data Loader has occurred, you will no longer have access to File-Based Loader or HCM Spreadsheet Data Loader.

If you have a requirement to load documents of record or areas of responsibility, then you can use HCM Data Loader in Limited mode with no impact on File-Based Loader or HCM Spreadsheet Data Loader, since these objects are not currently supported by File-Based Loader.

Environment management considerations

If you are live with File-Based Loader and testing HCM Data Loader in a nonproduction environment, then you should plan your environment refresh (P2T) requests carefully.

When you request an environment refresh, the HCM Data Loader settings will be overwritten, and the environment will revert to the default Limited mode.

You will need to go through the same steps as before to switch back to HCM Data Loader. That is, you must convert File-Based Loader GUIDs to HCM Data Loader source keys and switch HCM Data Loader Scope to Full.

During the HCM Data Loader migration validation and testing, important testing considerations must be included in your planning.

HCM Data Loader in Full mode is not compatible with File-Based Loader; therefore, it is not possible to have an environment with both HCM Data Loader and File-Based Loader at the same time.

This will impact your ability to test File-Based Loader transactions in your nonproduction environment while you are in the process of validating HCM Data Loader.

Important Note: You will need to ensure that the HCM Data Loader enabled environment is not required for any File-Based Loader testing prior to setting the HCM Data Loader Scope to Full.

Migration Steps for Moving from File-Based Loader to HCM Data Loader

It is not possible to move to HCM Data Loader for individual core objects on an incremental basis. It is a one-time migration and requires careful planning and preparation to ensure a smooth transition.

Choice of Keys

One of the most important decisions when considering the upgrade from File-Based Loader to HCM Data Loader is whether to continue to use the same key mechanism as is used in File-Based Loader (GUIDs) or whether to take advantage of the user key support that is available in HCM Data Loader.

User keys allow objects to be identified in HCM Data Loader using their natural key; for example, Job Code, Person Number, and so on.

File-Based Loader GUIDs have an equivalent in HCM Data Loader known as source keys. These are values that are defined in the source system and stored alongside the Oracle Fusion surrogate keys when objects are created in Oracle HCM Cloud. Source keys can be used to reference objects when loading related data or to identify specific objects when performing updates or deletes.

Within HCM Data Loader, each object can use different types of keys, so a decision needs to be made on an object-by-object basis to determine whether a user key or a source key will be used.

Conversion of GUIDs

In order to facilitate the upgrade from File-Based Loader to HCM Data Loader, a process is provided to migrate the File-Based Loader GUIDs to HCM Data Loader source system IDs. Regardless of whether user keys or source keys will be used, it is recommended that this process be run as the first step.

Template Generation

Before reworking the export processes, you can download a template for each business object supported by HCM Data Loader. These templates take into account any flex-field structures that are already in place. By using the templates, you can accurately outline the shape of the data that needs to be generated by the reworked export processes.

Rework Of Export Processes

The main task required for migration to HCM Data Loader is the rework of the export process that generates the data for loading to Oracle HCM Cloud. This process needs to take into account the correct attributes for the HCM Data Loader objects as well as preparing the files in the format expected by HCM Data Loader.

The attached spreadsheet provides a mapping between the HCM Data Loader data file name, file discriminator, and attribute name to the HCM File-Based Loader data file and attribute name.

HCM Data Loader only supports files loaded via Oracle WebCenter Content. If customers are currently using SFTP, then the processes will need to be changed. Similar to File-Based Loader, HCM Data Loader has a web service that can be used to invoke HCM Data Loader processing.

Sample Screenshot (Mapping Sheet)

Offline Verification using HDLdi

The offline Data File Validator Tool (HDLdi) can be used in the extract process to ensure that the data files being prepared are valid in terms of the data format. It also checks any business rules that apply to the data contained in the data file where other Oracle HCM Cloud data is not required as part of the validation.

HCM Data Loader Process Flow Diagram

HDL (HCM Data Loader) vs FBL (File-Based Loader) Comparative Analysis (Top 10 Points)

Sample HDL Files

Worker.dat
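
The original sample file is shown as a screenshot; as a minimal sketch of the general shape of an HCM Data Loader .dat file (the attribute list and values below are illustrative assumptions and vary by business object and release):

COMMENT Illustrative Worker.dat fragment - values are hypothetical
METADATA|Worker|SourceSystemOwner|SourceSystemId|PersonNumber|EffectiveStartDate|EffectiveEndDate|StartDate|ActionCode
MERGE|Worker|LEGACY|PER_12345|12345|2003/06/15|4712/12/31|2003/06/15|HIRE

Each file begins with a METADATA line naming the file discriminator (here, Worker) and the attributes being supplied, followed by MERGE lines carrying pipe-delimited values in the same order.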

Oracle Data Integrator Introduction


Objective: In this article we will see the introduction and architecture of Oracle Data Integrator (ODI).

 

Oracle Data Integrator (ODI) is a product from Sunopsis, acquired by Oracle in 2006, and is now part of the Oracle Fusion Middleware family.

1. ODI is built on an E-LT (Extract, Load and Transform) architecture.

2. Oracle Data Integrator 10g (10.1.3.5.0) suite includes three products

(a) Oracle Data Integrator

(b) Oracle Data Quality and

(c)  Oracle Data Profiling

3. ODI uses the database as the transformation engine, thus eliminating the requirement for a proprietary ETL engine.

4. Oracle Data Integrator Enterprise Edition (ODIEE) is a combination of ODI (Oracle Data Integrator) and OWB (Oracle Warehouse Builder).

 

Oracle Data Integrator Architecture

Oracle Data Integrator (ODI) consists of the following products:

1. Repository – A set of relational database schemas that store objects used, configured, or developed by ODI. There are two types of repository: the Master Repository (one and only one) and Work Repositories (one or more).

(a) Master Repository – There is only one Master Repository; it is used to store security information, topology information (servers, and so on), and versions of objects. All modules (Designer, Operator, Topology, and Security) have access to the Master Repository.

(b) Work Repository – Work-related objects (project objects) such as models, projects, and run-time information are stored in a Work Repository. There can be multiple Work Repositories per installation, all linked to a single Master Repository. A Work Repository is accessed by the Designer and Operator modules and the run-time agent.

2. Graphical Modules

(a) Designer (designer.sh|bat) – All project development takes place in this module, and this is the place where database and application metadata are imported and defined.

(b) Operator (operator.sh|bat) – Usually used to monitor a production ODI instance; shows execution logs, rows processed, and execution statistics.

(c) Topology Manager (topology.sh|bat) – To register servers, schemas, and agents in the Master Repository.

(d) Security Manager (security.sh|bat) – To manage user profiles and their access privileges.

3. Runtime Component / Scheduler Agent – The Scheduler Agent coordinates the execution of scenarios. It retrieves code from the execution repository and then requests the database server, scripting engine, or operating system to execute that code.

4. Metadata Navigator (MN) – A web (JSP/Servlet) application (available as oracledimn.war) that enables access to the repository through a web interface (browser). Metadata Navigator requires an application server; you deploy the oracledimn.war application on a pre-installed application server (Tomcat, OAS, WebLogic). This is an optional component.

 

ODI consists of the following components:

1. ODI Studio – A design-time component that consists of the Designer, Operator, Topology, and Security navigators. This is a developer tool used mainly by developers and administrators to develop and manage ODI. ODI Studio is NOT required at run time.

2. Agents – The run-time component that connects to the repository and executes the code. Agents also record execution time, logging, and messages for each execution. Agents are of two types: a) Standalone Agents and b) Java EE Agents.

Both types of agents are multi-threaded Java programs and can be configured for high availability. Java EE Agents require WebLogic Server, whereas Standalone Agents run in their own JVM container (no application server is required for a Standalone Agent).

3. ODI Repository – A database schema that contains configuration information, metadata, project scenarios, and execution logs. The ODI repository consists of one Master Repository and multiple Work Repositories.

(a) Master Repository – Contains security information (users, profiles, access rights), topology definitions (servers, schemas, contexts, languages), and versioned and archived objects.

(b) Work Repository – contains developed objects like Models, Projects and Scenario execution.

Execution Repository – This is a Work Repository that contains only execution information, as is typical for a production environment.

4. ODI Console – A web-based user interface (accessible via browser) that can be used to perform topology configuration and production operations, and to gain read access to the ODI repository. ODI Console is a web application and can be deployed on a Java EE application server such as Oracle WebLogic Server.

5. Public Web Services – ODI comes with several run-time services, such as a) the Public Web Service and b) the Agent Web Service, which can be deployed and executed from a Java EE application server such as Oracle WebLogic Server. The Agent Web Service can also be invoked from a Standalone Agent.

(a) The Public Web Service connects to the ODI repository to retrieve the list of contexts and scenarios in ODI.

(b) The Agent Web Service commands the ODI Agent (Standalone or Java EE) to start and monitor a scenario in ODI.

How-To Configure HCM Data Loader in Fusion Applications


How-To Configure HCM Data Loader in Fusion Applications

This article explains in detail the steps required to enable your system to use the HCM Data Loader tool (for HCM data migration) from legacy applications to Fusion Applications.

It is divided into various sections, detailed below:

Fusion Application Login Page:

The screenshot below shows a Fusion Application home page. Click on the Fusion Applications link (you will receive it from your project team colleagues / IT admin).

Application Version Details

Verify the application version

Navigation: Click on any link from the Navigator (e.g., Setup and Maintenance) -> A new page opens. On the top right-hand side of the page you will find an arrow next to your login user name, which opens the Settings and Actions list shown in the screenshots below:

Select ‘About This Page’. This will provide details of the Fusion application version (highlighted below).

User Roles:

The role required for using HCM Data Loader is Human Capital Management Integration Specialist.

Navigation to check the user roles: Navigator -> My Account -> Current Roles

NAVIGATION: Configure HCM Data Loader

Log in to Fusion Applications -> Click on Navigator -> Setup and Maintenance -> All Tasks

In the search window, enter the name ‘Configure HCM Data Loader’.

Click on the ‘Go to Task’ icon.

NOTE: The ‘Permitted’ field should show a green check. If it shows a red check, then you do not have the required permission to view the settings.

Now you should be able to see all the PARAMETERS set for HCM Data Loader.

Configuration Parameter – HCM Data Loader Scope:

1. Full: Enables the use of HCM Data Loader as the primary bulk inbound integration tool. File-Based Loader usage is disabled.

2. Limited: Enables the use of FBL as the primary bulk inbound integration tool and HDL only for objects not supported by FBL. It is not recommended to switch the tools for the same business objects intermittently.

New release 10 customers will have a default scope of FULL.

Changing the parameter value:

  • Customers can choose to move from LIMITED to FULL at any time (available via the UI).

  • A move from FULL to LIMITED requires development intervention (and can cause issues).

What is HCM Data Loader


Introduction to Oracle Fusion HCM Data Loader

HCM Data Loader, also known as HDL, is the next-generation data loading tool used in Fusion Applications.

Used in most new implementations starting July 2015, this tool has tremendously advanced features compared to its predecessor, FBL (File-Based Loader).

In this article we will try to understand what HDL is, along with a brief look at the key concepts associated with it.

The role required for using HCM Data Loader is Human Capital Management Integration Specialist.

So without much ado let’s begin….

Major Enhancements Over FBL

  1. Bulk loading of HCM data from any source

  2. Data-migration or incremental updates

  3. Flexible, pipe-delimited file format

  4. Comprehensive bulk loading capabilities

  5. Automated and user managed loading

  6. Stage Table Maintenance

While the above six are the most commonly stated and popularly advertised features, I have a slightly different view on them:

  1. Bulk Loading of HCM Data from any source

This point does not seem valid to me, as FBL also used to do the same thing, and hence I will discard it.

  2. Data Migration or incremental updates

FBL does the same, but if you have, say, ‘N’ records for an employee and you want to add one more record, you need to pass all N+1 records.

Using HDL you would be required to pass just the (N+1)th record, so this is a major enhancement.

  3. Flexible, pipe-delimited file format

Available in FBL too hence discarded again.

  4. Comprehensive bulk loading capabilities

A very new advanced and enhanced feature.

  5. Automated and user managed loading

FBL can also be automated using a web service call, hence again discarded.

  6. Stage Table Maintenance

This again was in FBL too hence discarded again.

So now we will primarily discuss two points, namely:

  1. Data Migration or Incremental Updates

  2. Comprehensive Bulk Loading Capabilities.

Data Migration Or Incremental Updates

We will take an example of various events in an individual’s life and correlate them to data transactions. Details are explained below:

Hire an Employee

Ms. Sandra Mora joins a company and becomes an employee on 15 Jun 2003. She gets a unique identifier number 12345 (just as an SSN in the US or a PAN card in India uniquely identifies an individual, each company has a unique identifier for each employee, referred to as the Employee Number).

Marital Status Change

She gets married on 21st Aug 2006 and undergoes the following changes in her employee record:

  1. Title: Gets changed from Ms to Mrs.

  2. Last Name: Her last name gets changed from “Mora” to “Bjork”.

  3. Email Address: Her email address gets changed from sandra.mora@abc.net to sandra.bjork@abc.net.

As a result of this change, the previous record (from Hire an Employee) gets end-dated on 20th Aug 2006 and a new record gets created effective 21st Aug 2006.

Middle Name Change

She gets a middle name added (she decided to have her husband’s first name added to her name) on 16th July 2008, and a new record gets created.

  1. Middle Name: Gets changed from blank to Albert.

FTE Capacity Change

Starting 25th Sep 2009 she works only 4 days a week instead of the initial 5 days a week, so her Full Time Equivalent (FTE) gets changed from 1 (actual days worked in a week (5) / total working days in a week (5)) to a new value of 0.8 (actual days worked in a week (4) / total working days in a week (5)).

  1. FTE Capacity Change: Gets changed from 1 to 0.8

Email Address Change

She undergoes an email address change on 17th March 2012.

  1. Email Address: Gets changed from sandra.bjork@abc.net to sandra.baron@abc.net

Middle Name Change

She gets a middle name change (she decided to have her father’s first name in her name instead) on 12th Dec 2013, and a new record gets created.

  1. Middle Name: Gets changed from Albert to Frank.

The same details are represented in a tabular format below for easier understanding.
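
Since the tabular image is not reproduced here, the table below is reconstructed from the narrative above (the end date of the final record is shown as the conventional end-of-time date, an assumption):

Effective Start | Effective End | Title | First Name | Middle Name | Last Name | Email                | FTE
15-Jun-2003     | 20-Aug-2006   | Ms    | Sandra     |             | Mora      | sandra.mora@abc.net  | 1
21-Aug-2006     | 15-Jul-2008   | Mrs   | Sandra     |             | Bjork     | sandra.bjork@abc.net | 1
16-Jul-2008     | 24-Sep-2009   | Mrs   | Sandra     | Albert      | Bjork     | sandra.bjork@abc.net | 1
25-Sep-2009     | 16-Mar-2012   | Mrs   | Sandra     | Albert      | Bjork     | sandra.bjork@abc.net | 0.8
17-Mar-2012     | 11-Dec-2013   | Mrs   | Sandra     | Albert      | Bjork     | sandra.baron@abc.net | 0.8
12-Dec-2013     | 31-Dec-4712   | Mrs   | Sandra     | Frank       | Bjork     | sandra.baron@abc.net | 0.8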

Comprehensive Bulk Loading Capabilities

Data Loader Process Flow Diagram

HCM Data Loader Supported Business Objects

Flexible Pipe-Delimited File Format

Automated Or User Managed Processing

Import and Load Data

Progress Icons

File Line Counts

Object Counts

Object Errors

Stage Table Maintenance

HDL Feature Guidelines

While Performing Conversion: Step 1 -> Configure Source System Owner

There could be multiple sources from which data can be migrated to Fusion; hence a Source System Owner needs to be configured for unique identification of the legacy data source. The screenshot below shows how to configure it.

While Performing Conversion: Step 2 -> Configure HCM Data Loader

HCM Data Loader: Error Report Generation Using Delivered HCM Extracts

As with all delivered HCM Extracts, it is recommended that you make a copy of the HCM Data Loader Data Set Summary extract and alter the output to your requirements.

Navigate to the Manage Extract Definitions task available from the Data Exchange work area.

Query the HCM Data Loader Data Set Summary extract.

Click the copy icon to copy the seeded extract and supply a name for the copied extract.

Once your copy is successfully created you can retrieve it by searching for it by name. Click on the name in the Search Results to make your required changes.

Implementation Decision Points

Last but not the least, Human Capital Management Integration Specialist is the role required to perform conversion. That brings me to the end of the topic.

Thanks a lot for all your time. Have a nice day!


How-To Create Calendar Data Type for a Presentation Variable in OTBI Analysis


Business Requirement

It is a common requirement to have a date parameter in reports. Things become difficult if the report is an OTBI report and the expected date parameter is a Date Picker. By default, what we get is a text field in which we need to ensure that the user input is exactly as expected by the program (generally we guide the user via a tooltip). But for now, we have a workaround, explained next.

By default whenever you create a presentation variable it gives the following options for user input:

But the requirement is to have 'Calendar' as a value in a dropdown list.

Steps:

  1. Select the Choice List option:

  2. In 'Choice List Values' select 'All Column Values':

  3. Now select any attribute of Date type from the Subject Area. For this example, let us choose Start Date from the Person folder (Workforce Management – Person Real Time). Choose the Variable Data Type as 'Date':

  4. Click on OK.

  5. Edit the prompt again, and 'Calendar' now appears as a User Input option in the dropdown list:

  6. Save the changes.

  7. The Date Picker appears as desired.


Oracle Goldengate Introduction


Objective: In this article we are going to see a brief introduction to Oracle GoldenGate.

Oracle GoldenGate is a strategic solution for real-time data integration.

GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise.

It moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure.

Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to DDL (data definition language) across a variety of topologies.


  • If a data pump is not used, Extract must send the captured data operations to a remote trail on the target.

  • The data pump and Replicat can perform data transformation.

  • You can use multiple Replicat processes with multiple Extract processes in parallel to increase throughput.

  • A Collector is by default started by the Manager (dynamic Collector), or it can be created manually (static). A static Collector can accept connections from multiple Extracts.

 

Supported Source Databases:

  • c-tree

  • DB2 for Linux, UNIX, Windows

  • DB2 for z/OS

  • MySQL

  • Oracle

  • SQL/MX

  • SQL Server

  • Sybase

  • Teradata

 

Supported Target Databases:

  • c-tree

  • DB2 for iSeries

  • DB2 for Linux, UNIX, Windows

  • DB2 for z/OS

  • Generic ODBC

  • MySQL

  • Oracle

  • SQL/MX

  • SQL Server

  • Sybase

  • TimesTen

 

Supported Database Versions:

  • Oracle 9i R2

  • Oracle 10g R1 and R2

  • Oracle 11g R1 and R2

  • Microsoft SQL Server 2008

  • IBM DB2 9.1, 9.5 and 9.7

  • Sybase 15.0 and 15.5

  • SQL/MX 2.3 and 3.1

 

Supported Operating Systems:

  • Windows Server 2008 R2

  • Windows Server 2008 with SP1+

  • Windows 2003 with SP2+/R2+

  • Windows 2003

  • Oracle Linux 6 (UL1+)

  • Oracle Linux 5 (UL3+)

  • Oracle Linux 4 (UL7+)

  • Red Hat EL 6 (UL1+)

  • Red Hat EL 5 (UL3+)

  • Red Hat EL 4 (UL7+)

  • Solaris 11

  • Solaris 10 Update 6+

  • Solaris 10 Update 4+

  • Solaris 2.9 Update 9+

  • HP-UX 11i

  • AIX 7.1 (TL2+)

  • AIX 6.1 (TL2+)

  • AIX 5.3 (TL8+)

  • zOS 1.08 , 1.09 , 1.10 , 1.11 and 1.12

Fusion Applications Data Mapping - From HCM Screens To DB Field


It is a common question I have encountered in my previous implementations: Consultants (Technical / Functional / Techno-Functional) and even business users (for UAT purposes) are interested in understanding how, in Fusion Applications, a UI field gets mapped to its corresponding data field in the database.

Truth be told, there isn’t formal documentation available, but I came across an MOSC document titled ‘Fusion Record Names.pdf’ (https://community.oracle.com/docs/DOC-821238), shared by Prasanna and compiled by him for a few of the objects related to HCM.

I have tried to present the same in an easier, readable format, but credit for the hard work goes to Prasanna.

Before starting, let us try to define / explain the terminologies used in this article. They are:   

  1. Object Name

These are the specific objects or business entities which are generally represented as a single page or multiple pages on the UI. Some examples are Person, Person Documentation, WorkRelationshipByPerson, Location, Jobs, JobFamily, Position, Business Units, Department, Grades, Grade Rates, and Element Entry, to name a few.

  1. Search Records

These are the record types which are exposed on the UI pages to enable search (each page has a search button, and that search is executed on a record type variable which is basically a UI page). A few examples are per_persons, per_all_people_f, per_person_names_f, and per_email_addresses.

  1. Base Records

These are actual base records (actual DB tables). A few examples are PER_DRIVERS_LICENSE_TYPES, PER_DRIVERS_LICENSES, PER_PERSON_DLVRY_METHODS, PER_IMAGES, PER_CITIZENSHIPS, PER_CONTACT_RELSHIPS_F, PER_PERSON_TYPE_USAGES_M, PER_RELIGIONS, PER_PHONES, and PER_NATIONAL_IDENTIFIERS, to name a few.

In essence, based on the spreadsheet below, first you need to identify the business object (from the Fusion UI), then check all the corresponding tables mentioned for it and look for the data field. In most cases, the names of the fields (called LABEL / PROMPT in the UI) correspond to similar names in the database, but in case of any discrepancy you may raise an SR to get it clarified. You may also verify the mapping by entering some data in the UI and checking it with SQL queries on the database side (using a BI data model in a SaaS environment).
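
As a minimal sketch of that verification step (assuming the per_person_names_f base table from the list above; the bind variable and the date-effective filter are illustrative):

-- Check the name fields entered on the UI against the base record.
-- :person_id is a hypothetical bind; the BETWEEN clause picks the row
-- current as of today from the date-effective table.
SELECT ppnf.first_name,
       ppnf.last_name
FROM   per_person_names_f ppnf
WHERE  ppnf.person_id = :person_id
AND    TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date
                          AND ppnf.effective_end_date;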

This is not the complete list, but it is probably the most commonly used. I would try adding more once I collect more information on it.


Oracle Goldengate Installation


Objective: In the previous article, we learned about the Oracle GoldenGate introduction. This article details the installation and configuration of Oracle GoldenGate v11.2.1.0.1 for Oracle 11g on Linux x86-64.

 

Step 1: Installing the Database

For Database Installation steps click here

 

Step 2: Download Oracle GoldenGate


Download Oracle GoldenGate from the Oracle website.


Click on Download All to start downloading the GoldenGate file.

 

Now,

Create a GoldenGate OS user

 

[root@ggt1 ~]# useradd -G oinstall ggadmin

[root@ggt1 ~]# passwd ggadmin

Changing password for user ggadmin.

New UNIX password:

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

[root@ggt1 ~]#

 

Make the GoldenGate software home.

 

[root@ggt1 ~]# cd /u01/app/oracle

[root@ggt1 oracle]# mkdir ggs ggs/11.2.0

[root@ggt1 oracle]# chown -R ggadmin:ggadmin ggs/

[root@ggt1 oracle]#

 

Set up Oracle Environment for the ggadmin user.

 

[ggadmin@ggt1 ~]$ cat env11g

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

export ORACLE_SID=ggdb1

export GG_HOME=/u01/app/oracle/ggs/11.2.0

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$GG_HOME:$LD_LIBRARY_PATH

export PATH=$GG_HOME:$ORACLE_HOME/bin:$PATH

[ggadmin@ggt1 ~]$

 

Copy the GoldenGate software to the GoldenGate software home and uncompress the file.

 

[ggadmin@ggt1 ~]$ cd $GG_HOME

[ggadmin@ggt1 11.2.0]$ cp ~/ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip .

[ggadmin@ggt1 11.2.0]$ unzip ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip

Archive:  ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip

inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar

inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf

inflating: Oracle GoldenGate 11.2.1.0.1 README.txt

inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

[ggadmin@ggt1 11.2.0]$ tar xvf fbo_ggs_Linux_x64_ora11g_64bit.tar

UserExitExamples/

UserExitExamples/ExitDemo_more_recs/

UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.HPUX

UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.SOLARIS

UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.LINUX

UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.AIX

< cut >

server

sqlldr.tpl

tcperrs

ucharset.h

ulg.sql

usrdecs.h

zlib.txt

[ggadmin@ggt1 11.2.0]$

Next, using GGSCI, create the GoldenGate working directories.

 

[ggadmin@ggt1 ~]$ cd $GG_HOME

[ggadmin@ggt1 11.2.0]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle

Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO

Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14

GGSCI (ggt1.odlabs.net) 1> create subdirs

Creating subdirectories under current directory /u01/app/oracle/ggs/11.2.0

Parameter files            /u01/app/oracle/ggs/11.2.0/dirprm: already exists

Report files               /u01/app/oracle/ggs/11.2.0/dirrpt: created

Checkpoint files           /u01/app/oracle/ggs/11.2.0/dirchk: created

Process status files       /u01/app/oracle/ggs/11.2.0/dirpcs: created

SQL script files           /u01/app/oracle/ggs/11.2.0/dirsql: created

Database definitions files /u01/app/oracle/ggs/11.2.0/dirdef: created

Extract data files         /u01/app/oracle/ggs/11.2.0/dirdat: created

Temporary files            /u01/app/oracle/ggs/11.2.0/dirtmp: created

Stdout files               /u01/app/oracle/ggs/11.2.0/dirout: created

GGSCI (ggt1.odlabs.net) 2> exit

[ggadmin@ggt1 11.2.0]$

 

Create a database user and tablespace for GoldenGate.

 

SQL> create tablespace ogg_data

 2  datafile '/u01/app/oracle/oradata/ggdb1/oggdata01.dbf' size 300M;      

Tablespace created.

SQL> create user ogg identified by password

 2  default tablespace ogg_data

 3  temporary tablespace temp;

User created.

SQL>

Next, grant the following privileges to the GoldenGate user.

 

SQL> grant create session to ogg;

Grant succeeded.

SQL> grant alter session to ogg;

Grant succeeded.

SQL> grant select any dictionary to ogg;

Grant succeeded.

SQL> grant create table to ogg;

Grant succeeded.

SQL> grant execute on dbms_flashback to ogg;

Grant succeeded.

SQL> grant flashback any table to ogg;

Grant succeeded.

SQL> grant select any transaction to ogg;

Grant succeeded.

SQL> grant select on v_$database to ogg;

Grant succeeded.

SQL>                           

Finally, Oracle GoldenGate requires supplemental logging to be enabled at the database level. You can verify that supplemental logging is enabled at the database level with the following query.

 

SQL> select supplemental_log_data_min   

 2  from v$database;

SUPPLEME

--------

NO

SQL>

The output must be YES or IMPLICIT. If the result is NO, as the SYS user, issue the following alter database to enable minimal supplemental logging at the database level. Be sure to switch the log file after adding supplemental logging.

 

SQL> alter database add supplemental log data;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> select supplemental_log_data_min

 2  from v$database;

SUPPLEME

--------

YES

SQL>

