
How to validate XML against XSD in java


Objective:
In the previous article, Method Overriding in Java, method overriding was explained in detail with an example. In this article, we will learn how to validate XML against an XSD in Java.

How to validate XML against XSD in java:
The Java XML Validation API can be used to validate XML against an XSD. The javax.xml.validation.Validator class is used in this program to validate an XML file against an XSD file. Here are the sample XSD and XML files used.

Employee.xsd

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.journaldev.com/Employee"
xmlns:empns="http://www.journaldev.com/Employee" elementFormDefault="qualified">
<element name="empRequest" type="empns:empRequest"></element>
<element name="empResponse" type="empns:empResponse"></element>
<complexType name="empRequest">
<sequence>
<element name="id" type="int"></element>
</sequence>
</complexType>
<complexType name="empResponse">
<sequence>
<element name="id" type="int"></element>
<element name="role" type="string"></elemen>
<element name="fullName" type="string"></element>
</sequence>
</complexType>
</schema>

Notice that the above XSD contains two root elements and also a namespace. I have created two sample XML files from the XSD using Eclipse.


EmployeeRequest.xml

<?xml version="1.0" encoding="UTF-8"?>

<empns:empRequest xmlns:empns="http://www.journaldev.com/Employee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.journaldev.com/Employee Employee.xsd ">
<empns:id>1</empns:id>
</empns:empRequest>

EmployeeResponse.xml

<?xml version="1.0" encoding="UTF-8"?>

<empns:empResponse xmlns:empns="http://www.journaldev.com/Employee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.journaldev.com/Employee Employee.xsd ">
<empns:id>1</empns:id>
<empns:role>Developer</empns:role>
<empns:fullName>Pankaj Kumar</empns:fullName>
</empns:empResponse>

Here is another XML file that doesn't conform to Employee.xsd.


employee.xml

<?xml version="1.0"?>
<Employee>
<name>Pankaj</name>
<age>29</age>
<role>Java Developer</role>
<gender>Male</gender>
</Employee>

Here is the program used to validate all three XML files against the XSD. The validateXMLSchema method takes the XSD and XML file paths as arguments and returns true if validation is successful, or false otherwise.

XMLValidation.java

package com.journaldev.xml;
import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;
public class XMLValidation {

    public static void main(String[] args) {
        System.out.println("EmployeeRequest.xml validates against Employee.xsd? " + validateXMLSchema("Employee.xsd", "EmployeeRequest.xml"));
        System.out.println("EmployeeResponse.xml validates against Employee.xsd? " + validateXMLSchema("Employee.xsd", "EmployeeResponse.xml"));
        System.out.println("employee.xml validates against Employee.xsd? " + validateXMLSchema("Employee.xsd", "employee.xml"));
    }

    public static boolean validateXMLSchema(String xsdPath, String xmlPath) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File(xsdPath));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File(xmlPath)));
        } catch (IOException | SAXException e) {
            System.out.println("Exception: " + e.getMessage());
            return false;
        }
        return true;
    }
}


Output of the above program is:

EmployeeRequest.xml validates against Employee.xsd? true

EmployeeResponse.xml validates against Employee.xsd? true

Exception: cvc-elt.1: Cannot find the declaration of element 'Employee'.

employee.xml validates against Employee.xsd? false

A benefit of using the Java XML validation API is that we don't need to parse the file ourselves, and no third-party APIs are required.
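The program above stops at the first violation it reports. If we want to collect every validation problem in a single pass instead, the Validator also accepts a custom org.xml.sax.ErrorHandler. Here is a minimal sketch of that approach (the class and method names are illustrative, not part of the original program):

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class XMLValidationAllErrors {

    // Returns every warning and recoverable error found while validating,
    // instead of stopping at the first one.
    public static List<SAXParseException> collectProblems(String xsdPath, String xmlPath)
            throws SAXException, IOException {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File(xsdPath));
        Validator validator = schema.newValidator();

        final List<SAXParseException> problems = new ArrayList<SAXParseException>();
        validator.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { problems.add(e); }
            public void error(SAXParseException e) { problems.add(e); }
            public void fatalError(SAXParseException e) throws SAXException { throw e; }
        });
        validator.validate(new StreamSource(new File(xmlPath)));
        return problems;
    }
}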






What Is SQL


What is SQL?

SQL stands for Structured Query Language.

Structured = In a Processed Way, Proper Methodical Approach

Query = Asking Questions

Language = Mode / Medium of Communication

So SQL = Structured + Query + Language

             = (In a Processed Way, Proper Methodical Approach) + (Asking Questions) + (Mode / Medium of Communication)

After a rearrangement if we try to construct a line it could be:

  1. Mode / Medium of Communication Via Asking Questions

  2. Mode / Medium of Asking Questions in a proper methodical approach.

Out of the above two, option 2 sounds better, hence I will choose it.

So SQL now stands for Mode / Medium of Asking Questions in a proper methodical approach.

Now this brings us to the next question.

Asking questions: but questions are asked by someone (Individual 1) of someone else (Individual 2).

So in SQL, WHO asks the question, and of WHOM?

WHO = Business / Business User / Technical Consultant

WHOM = Database / ERP System / Any system Holding Data / Datawarehouse

So we come to the conclusion that, via SQL,

a Business / Business User / Technical Consultant asks questions of a data-storing system (commonly referred to as a database) in a proper methodical way (following a specific syntax).

Graphical Representation

 



Database Table:

The above data gets stored in the database in a structure called a table, which looks as shown below:


Database Column

Each database table is made up of one or more vertical columns.


In the above example we have following columns

  1. Employee No

  2. Employee Name

  3. Position

  4. Department




SQL Example

Say a technical consultant, Adam, has been asked by the CEO of ABC Company to give him the details of all employees. The output would be fetched via SQL.

SQL Syntax

So if we want to get the data of all employees, we will have SQL like:

SQL 1: Get All Data

-- (a)
SELECT * FROM EMPLOYEES;

-- (b)
SELECT ALL * FROM EMPLOYEES;

-- (c)
SELECT EMPLOYEE_NO,
       EMPLOYEE_NAME,
       POSITION,
       DEPARTMENT
FROM   EMPLOYEES;

-- (d)
SELECT ALL E.* FROM EMPLOYEES E;

-- (e)
SELECT E.* FROM EMPLOYEES E;

-- (f)
SELECT E.EMPLOYEE_NO   AS "Employee No",
       E.EMPLOYEE_NAME AS "Employee Name",
       E.POSITION      AS "Position",
       E.DEPARTMENT    AS "Department"
FROM   EMPLOYEES E;

Note: In (d), (e) and (f) we have used 'E' for 'EMPLOYEES', which is an alias. Just as we have real names (Dinesh) and pet names (Dinu), we can refer to the table either by its real name (EMPLOYEES) or by its pet name (E).


WHERE CLAUSE

Whenever we want to restrict data in SQL we need to use the WHERE clause. In this example we have three departments, but we are interested in the data for ABC Sales only, so the SQL will be:

SQL 2: SQL with WHERE Clause

-- (a)
SELECT * FROM EMPLOYEES WHERE DEPARTMENT = 'ABC Sales';

-- (b)
SELECT ALL * FROM EMPLOYEES WHERE DEPARTMENT = 'ABC Sales';

-- (c)
SELECT EMPLOYEE_NO,
       EMPLOYEE_NAME,
       POSITION,
       DEPARTMENT
FROM   EMPLOYEES
WHERE  DEPARTMENT = 'ABC Sales';

-- (d)
SELECT ALL E.* FROM EMPLOYEES E WHERE E.DEPARTMENT = 'ABC Sales';

-- (e)
SELECT E.* FROM EMPLOYEES E WHERE E.DEPARTMENT = 'ABC Sales';

-- (f)
SELECT E.EMPLOYEE_NO   AS "Employee No",
       E.EMPLOYEE_NAME AS "Employee Name",
       E.POSITION      AS "Position",
       E.DEPARTMENT    AS "Department"
FROM   EMPLOYEES E
WHERE  E.DEPARTMENT = 'ABC Sales';




How-To Create a Custom BIP Report In Fusion Applications


Business Requirement

Oracle has already delivered a lot of BIP reports in the Fusion instance; however, during the course of an implementation it is a common requirement to develop new ones.

Content

There are a few common steps involved:

  1. Create a SQL query comprising DB tables (that exist in the Fusion schema).

We have a SQL query as below:

select A.person_number,
       A.full_name,
       A.legal_employer,
       A.business_unit,
       A.department,
       A.absence_plan_name,
       A.adjustment_type,
       A.adjustment_reason,
       A.value,
       A.procd_date accrual_entry_date
from
(
select apaed.per_accrual_entry_dtl_id,
       apaed.per_accrual_entry_id,
       apaed.enterprise_id,
       apaed.value,
       apaed.type,
       apaed.created_by,
       apaed.creation_date,
       apaed.last_updated_by,
       apaed.last_update_date,
       apaed.last_update_login,
       apaed.person_id,
       apaed.pl_id,
       apaed.procd_date,
       apaed.per_event_id,
       apaed.legal_employer_id,
       apaed.assignment_id,
       apaed.per_absence_entry_id,
       apaed.per_plan_enrt_id,
       apaed.work_term_asg_id,
       DECODE(apaed.type, 'ADJOTH', ADD_MONTHS(procd_date, 6), NULL) proposed_expiration_date,
       papf.person_number,
       ppnf.full_name,
       paam.organization_id,
       dept.name                          department,
       paam.legal_entity_id,
       legal_employer.classification_code legal_emp_classification_code,
       legal_employer.name                legal_employer,
       paam.business_unit_id,
       business_unit.classification_code  bu_classification_code,
       business_unit.name                 business_unit,
       absence_plan.name                  absence_plan_name,
       flvt.meaning                       adjustment_type,
       adj_reason.meaning                 adjustment_reason
  FROM anc_per_acrl_entry_dtls apaed
  JOIN fnd_lookup_values_tl flvt
       ON (flvt.lookup_type = 'ANC_ACCRUAL_ENTRY_TYPE'
       AND flvt.lookup_code = apaed.type
       AND flvt.language = 'US')
  JOIN per_all_people_f papf
       ON (apaed.person_id = papf.person_id
       AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date)
  JOIN per_person_names_f ppnf
       ON (ppnf.name_type = 'GLOBAL'
       AND ppnf.person_id = apaed.person_id
       AND TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date)
  JOIN per_all_assignments_m paam
       ON (paam.assignment_id = apaed.assignment_id
       AND paam.person_id = apaed.person_id
       AND paam.primary_assignment_flag = 'Y'
       AND paam.assignment_type = 'E'
       AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date)
  LEFT OUTER JOIN
       (SELECT flvt1.lookup_code,
               flvt1.meaning
          FROM fnd_lookup_values_tl flvt1
         WHERE flvt1.lookup_type = 'ANC_ABS_PLAN_OTHER_REASONS'
           AND flvt1.language = 'US'
       ) adj_reason
       ON (apaed.adjustment_reason = adj_reason.lookup_code)
  LEFT OUTER JOIN
       (SELECT hauft.organization_id,
               hauft.name
          FROM hr_org_unit_classifications_f houcf,
               hr_all_organization_units_f haouf,
               hr_organization_units_f_tl hauft
         WHERE haouf.organization_id = houcf.organization_id
           AND haouf.organization_id = hauft.organization_id
           AND haouf.effective_start_date BETWEEN houcf.effective_start_date AND houcf.effective_end_date
           AND hauft.language = 'US'
           AND hauft.effective_start_date = haouf.effective_start_date
           AND hauft.effective_end_date = haouf.effective_end_date
           AND houcf.classification_code = 'DEPARTMENT'
           AND TRUNC(SYSDATE) BETWEEN hauft.effective_start_date AND hauft.effective_end_date
       ) dept
       ON (paam.organization_id = dept.organization_id)
  LEFT OUTER JOIN
       (SELECT hauft.organization_id,
               hauft.name,
               houcf.classification_code
          FROM hr_org_unit_classifications_f houcf,
               hr_all_organization_units_f haouf,
               hr_organization_units_f_tl hauft
         WHERE haouf.organization_id = houcf.organization_id
           AND haouf.organization_id = hauft.organization_id
           AND haouf.effective_start_date BETWEEN houcf.effective_start_date AND houcf.effective_end_date
           AND hauft.language = 'US'
           AND hauft.effective_start_date = haouf.effective_start_date
           AND hauft.effective_end_date = haouf.effective_end_date
           AND houcf.classification_code = 'HCM_LEMP'
           AND TRUNC(SYSDATE) BETWEEN hauft.effective_start_date AND hauft.effective_end_date
       ) legal_employer
       ON (paam.legal_entity_id = legal_employer.organization_id)
  LEFT OUTER JOIN
       (SELECT hauft.organization_id business_unit_id,
               hauft.name,
               houcf.classification_code
          FROM hr_org_unit_classifications_f houcf,
               hr_all_organization_units_f haouf,
               hr_organization_units_f_tl hauft
         WHERE haouf.organization_id = houcf.organization_id
           AND haouf.organization_id = hauft.organization_id
           AND haouf.effective_start_date BETWEEN houcf.effective_start_date AND houcf.effective_end_date
           AND hauft.language = 'US'
           AND hauft.effective_start_date = haouf.effective_start_date
           AND hauft.effective_end_date = haouf.effective_end_date
           AND houcf.classification_code = 'FUN_BUSINESS_UNIT'
           AND TRUNC(SYSDATE) BETWEEN hauft.effective_start_date AND hauft.effective_end_date
       ) business_unit
       ON (paam.business_unit_id = business_unit.business_unit_id)
  LEFT OUTER JOIN
       (SELECT aapf.absence_plan_id,
               aapft.name
          FROM anc_absence_plans_f_tl aapft,
               anc_absence_plans_f    aapf
         WHERE aapft.absence_plan_id = aapf.absence_plan_id
           AND aapf.plan_status = 'A'   -- added to pick only Active Absence Plans
           AND TRUNC(SYSDATE) BETWEEN aapf.effective_start_date AND aapf.effective_end_date
           AND TRUNC(SYSDATE) BETWEEN aapft.effective_start_date AND aapft.effective_end_date
           AND aapft.language = 'US'
       ) absence_plan
       ON apaed.pl_id = absence_plan.absence_plan_id
 where pl_id = absence_plan.absence_plan_id
   and apaed.value <> 0
 order by apaed.person_id, apaed.procd_date asc
) A
where person_number     = nvl(:pPersonNumber, person_number)
and   legal_employer    = nvl(:pLegalEmployer, legal_employer)
and   business_unit     = nvl(:pBusinessUnit, business_unit)
and   procd_date       >= nvl(:pCalculationDate, procd_date)
and   department        = nvl(:pDepartment, department)
and   full_name         = nvl(:pPersonName, full_name)
and   absence_plan_name = nvl(:pAbsencePlanName, absence_plan_name)

Each nvl(:parameter, column) condition makes its bind parameter optional: if the parameter is left null, the column is compared with itself and no rows are filtered out.


Navigate to the screen as shown:


Under Published Reporting -> Data Model

Create a New Data Set (of SQL Query type) as shown in Screenshot below:

Give a Name to Data Set (for this example say PersonAbsenceAccrualEntryDetails_ds):

You would need to take special care while selecting Data Source (Logic Below):

  1. If you are Building Finance reports use : ApplicationDB_FSCM

  2. If you are Building HCM reports use : ApplicationDB_HCM

  3. If you are Building CRM Reports use : ApplicationDB_CRM

For this example we use ApplicationDB_HCM

Depending on the number of parameters (bind variables) used, a popup window will appear:

Click OK. Give a Name to the Parameters as shown below

Parameter Details :

Data Model is created. Now we need to check the data retrieved.




  2. Create List of Values for Parameters.

List of Values

LOV SQL Section

This Section shows all the SQL which are used for LOV Creation.

Legal Employer LOV SQL

Business Unit LOV SQL

Department LOV SQL

Absence Plan LOV SQL

Person Name LOV SQL


View Data :

Click on ‘Save As Sample Data’:

Create Report

Click on Create Report


Click Next and Follow Train Stops :

Create Table

Drag and drop fields, and the final report output will look like this:


R12 - Article 2 - Multiple organisations


This article is the continuation of Multiple Organisation.

Important concepts in R12 – Finance module – Multi ORG

Inventory Organisation:

It is the lowest level. It is the physical manufacturing entity which tracks inventory and balances (outgoing and incoming). It is used in the supply chain. An inventory is a warehouse or a store where all the materials are kept. It is linked to the operating unit.

To summarize: at the top is the business group, then ledgers; each ledger has multiple legal entities assigned to it, under which operating units are assigned, and then inventory organisations. An inventory organisation is a place where materials are procured and received; it also covers sales.

Benefits of multiple organisations:

Supports any number of organisations under a single installation, even if they use different ledgers.

Supporting Flexible Organisation Models

Data access is secured, allowing users to access relevant information only. This can be done through a single instance: though the data is shared, each operating unit's data remains secure.

Multi Org Model or Structure:

The multi org model provides a hierarchy that dictates how transactions flow through different business organisations and how they interact with each other.     



Any company that is going to use Oracle has to define the above values. The business group is at the topmost level, where the HRMS information is stored. Under business groups there can be multiple SOBs (Sets of Books). In normal practice each country would have its own set of books (chart of accounts, currency, calendars and combinations). Then there could be multiple legal entities, which are legal reporting companies, each with its own registration number.

For example, in the UK, for the same customer there could be multiple legal operations. Operating units include all AP and AR transactions; all subledger entries are done at the operating unit level. The operating unit is a critical concept to understand from the accounting perspective: all subledger accounting happens at the OU level.

Last is the inventory organisation, where the material stock or inventory is tracked. Each of these levels has a parent-child relationship. Operating units can have multiple inventory orgs, but one inventory org cannot be assigned to multiple operating units. One operating unit can be linked to only one legal entity, while under one legal entity there can be multiple operating units. There can be multiple sets of books, and once a legal entity is assigned to a set of books, it cannot be assigned to another set of books.

The above applied to the previous version; the same structure exists in Release 12, except that it does not have sets of books, which are replaced by ledgers.




This is the overall architecture.

The reason for the inventory organisation being under the operating unit can be explained with an example: an operating unit in the USA could have multiple warehouses, for raw materials, finished goods etc., but they all come under, or are associated with, that operating unit in the USA.

Release 11 versus Release 12:

The multi-org concepts and structure remain the same in Release 12. Differences relating to the setups are as follows.

  • Set of books are replaced by ledgers in release 12.

  • Legal entities are created using accounting setup manager or legal entity manager/ configurator in release 12.

  • In the previous version the option was to define the legal entity at the HRMS level and then link it to the organisation; in Release 12 it is all part of General Ledger.

  • Operating units can also be created in accounting setup manager itself in release 12.

Important points in the oracle user guide:

  • Responsibility:

One of the important concepts in Oracle is the responsibility. It is the access given to the user to make entries.

For example, suppose there are three operating units, in India, the USA and China. Indian users should enter data using India-related responsibilities, such as India Payables and India Receivables, and should not be able to see US data; in Oracle this restriction is enforced through responsibilities and the access profiles set on them. The responsibility matrix should be defined very carefully before the structure is designed: for each module to be used (AP, AR and so on) there are typically multiple levels, such as manager-level, inquiry-level and entry-level responsibilities. Each combination of module and level is defined and linked to the operating unit it serves. So although the server is shared (it could hold US, UK and Indian data), an Indian user views only Indian data and a UK user views only UK data, because that access is granted through a responsibility. Each user, based on country or location and on how the business rules are defined, will have the corresponding responsibilities assigned. One person could be working in accounts payable, entering supplier invoices, while another may have only view access to the data. A manager who does not enter anything and only wants to view reports would have an inquiry responsibility; the person entering data may not be able to validate it, with validation done at the next level. On this basis the number of responsibilities can be defined. So a single instance holds all units, but access is defined per user, and that access is a responsibility. Responsibilities are attached to users.

User is the person logging in and should be able to see only the responsibilities assigned to him. So responsibility is one of the key terms.

Responsibility determines which operating units you can access when oracle applications are used.

  • Operating units are not associated with legal entities. Operating units are assigned to ledgers and a default legal context.

To explain the above point: operating units are linked to the ledger, but each must have a default legal context, which means a default legal entity is placed there. So if there are multiple legal entities, that many operating units should be defined and attached to them. For any company going live with the Oracle finance modules (AP, AR etc.), the minimum requirement is one ledger, one legal entity and one operating unit. An inventory organisation is required only if the inventory (supply chain) module is used.

As compared with R11 in R12 multiple operating units can be assigned to a single responsibility.

With Multi-Org Access Control (MOAC), a security profile is attached to a particular user, and that user can enter invoices and payments for multiple operating units through a single responsibility. The MOAC security profile can be set at the user level or the responsibility level, depending on the requirement.

  • To use multiple organisations, you must define an accounting setup with at least one legal entity.

R12 - Article 3 - Multiple organisations


Important points stated in the Oracle User Guide:

  • Operating units can be defined from the Define organisation window in oracle HRMS or from Accounting Setup Manager in General Ledger.

There are two options: the operating unit can be defined in the organisation window under the HRMS responsibility, or under the General Ledger responsibility. In the previous version this could be done only from HRMS or the inventory organisation.

For a fresh installation, Oracle Applications provides a predefined business group, Setup Business Group. It is recommended that the predefined business group be modified rather than defining a new one.

This means the name of the predefined business group can be changed, following the organisation's naming convention (e.g. <company name>_BG), and then the settings can be modified. So there is a predefined business group available at the time of doing the configuration. If there is a single business group, the predefined one can be renamed and used; if there are multiple business groups, each has to be defined separately.

Example: some companies may want separate business groups at the HRMS level, in order to have different payroll and HRMS setups mapped across countries; for that, the multiple business group concept has to be used. If HR is centralised and employee numbering is to be standard across different countries, then there would be a single business group.

  • HRMS does not have anything to do with operating units; HRMS is interested in payroll information and employee data, so employees are mapped at the business group level. The operating unit is for accounting purposes: AP and AR sit at the operating unit level, whereas HRMS works at the BG level. Payroll and HR transaction employees are mapped at the HRMS level; they do not get linked to the operating unit level.

Example: if the headquarters is in the USA, HRMS is consolidated in the USA but split across multiple organisations: there could be a separate business group for India, another for the USA and another for the UK.

Prerequisites for setting up Multi-Org in Release 12

  • Access to define organisations and define legal entity configurator pages, or the accounting setup manager page.

So the organisation has to be defined first, then the HR organisation, then the ledger and legal entity. To achieve that, there should be access to GL, to Purchasing for the inventory organisation, and to HRMS for the business group.

The system administrator is a well-known concept in Oracle. The system administrator grants responsibility access and sets profile options: what needs to be set up, and who needs access to what. Only a person with system administrator access can give users access to the various operating units, business groups, inventory orgs and profile setups. The system administrator creates users, enables access and defines responsibilities; the actual access is granted at this level, which makes it the most important role. For example, when a new person joins the organisation, the creation of the user ID and the granting of access, such as AP access, is done by the system administrator. Because it is the most critical responsibility, it should not be given to everybody; it has to be restricted to the limited set of people who actually administer the system.

  • It is advised to have access to HRMS, General Ledger, purchasing and system administrator responsibilities.

Setting up Multi Org in release 12:

The sequence of steps to be carried out:

  1. Develop or design a structure

  2. Define location

  3. Define business group

  4. Define or use an existing accounting key flex field structure

  5. Create Legal entity

  6. Create Ledgers

  7. Create and Assign operating units and legal entities

  8. Create Inventory Organisation

  9. Run reports

The first step is to develop or design a structure. Here it is decided how many business groups are required, what kind of ledger structure and account structure there has to be, and how many segments are required. Once the structure is defined it has to be linked. The structure is discussed with the customer and agreed.


HCM Delete Diagnostics Sample Use Case


Business Requirement

HCM Delete Diagnostics is a tool provided by Oracle to delete unwanted data from the Fusion system.

Sometimes we might have to create dummy data in the system for a CRP (Conference Room Pilot), and at times we might encounter situations where incorrect data gets loaded using the Data Exchange tool. In such scenarios we may use 'HCM Delete' as an option.

Specific roles need to be attached to the user to be able to use it.

 

Roles Required to Enable HCM Delete Diagnostics

 

Fusion HCM Nuggets - 1


 

 

What is Fusion HCM (Human Capital Management)?

Fusion – the "F word": a baby born out of a marriage between the best-of-breed features from Oracle product lines and the best practices shared by customers.

It is a fast-to-implement, low-cost, cloud-based HCM application suite that comes with multiple deployment options. It offers new and enhanced features and functionality, built to empower enterprises and organisations.

Besides the traditional HR and Payroll, Fusion HCM comes with new modules like Talent Management (unleash people power) and Workforce Analytics (decision-making support), and a completely new collaborative tool, Network at Work (social networking for employees behind the firewall).

Oracle Business Intelligence Enterprise Edition (OBIEE)

SUMMARY

The OBIEE suite delivers a complete, robust set of enterprise reporting capabilities: reporting, ad hoc query and analysis, online analytical processing (OLAP) and dashboards. In this article we get an overview of OBIEE 11g and look at its features, upgrades, components, utilities etc.
  1. DATA WAREHOUSE

In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. The term data warehouse was coined by Bill Inmon.

A warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management's decision-making process.

Subject-oriented

Data that gives information about a particular subject instead of about the company's ongoing operations.

Difference between Operational System and Data Warehouse

  • Operational: future analysis. Data warehouse: historical analysis.

  • Operational: daily or weekly level analysis is made. Data warehouse: month, quarter and year level analysis is done.

  • Operational: we sell a product that has different categories containing different items. Data warehouse: here we can see the entire set of products, like electronics.

  • Operational: it focuses on invoices comparison. Data warehouse: it gives importance to customer preferences in order to find targeted customer groups.


Integrated

Data gathered into the data warehouse from a variety of sources, such as spreadsheets, an Oracle database or PeopleSoft, is merged into a coherent whole as shown below.



Time-variant
All data in the data warehouse is identified with a particular time period.



Non-volatile

In an operational system there are daily transactions; we keep those transactions in the system for some time and then move the data to the data warehouse, so they don't stay in the operational database. In the data warehouse, by contrast, data is not deleted or truncated, which is what makes it non-volatile.

Difference between OLTP and Data Warehouse



GoldenGate is a replication tool which immediately replicates any transactions into the data warehouse without waiting for the ETL process.

  2. DW Architecture



The data from different sources is dumped into staging tables. The first Extract, Transform and Load (ETL) part loads the data from staging to the data warehouse. The second ETL part focuses on all business transformations and data integrity. There is also something called MDM (master data management).

  3. Datawarehousing concepts

  4. Source Systems

  • Systems which are identified for providing data to be integrated in the DW

  • Data in DW can also be sourced from excel sheets and other unstructured sources of data

  • Mostly termed as Transaction Systems,  OLTP, or System of Record

Common Characteristics of Source Systems

  • Applications that run the business

  • Help automate business processes within organizations

  • Optimized for inserts and updates

  • Does not hold historical data (Data is often purged)

  • Source systems can be packaged applications (SAP, MFG/PRO, PeopleSoft, Siebel, Oracle Applications) or custom built solutions


  5. Staging Layer / Area

  • Predominantly a storage area to hold raw data extracted from the source systems

  • Minimal or No transformation done to extract data from source before loading to staging area

  • Data type conversions, data length normalization etc can be done

  • Data Structures designed to aid in optimizing data loading

  • Data in staging layer is not available for reporting

  • Staging area also serves as a permanent data store for data which might get purged in the source systems

  • Facilitates incremental data loading (handling of rejected records)


  6. Extract Transform Load (ETL)

ETL comes before and after the staging layer; it extracts data from the source systems, transforms it, and loads it. The transformations align the data with the business requirements.

Actions

  • It extracts information from one or more source

  • Approaches – Flat File Vs. Direct Dip

  • Challenges – Incremental, Full Data Extraction, Identifying data changes

  • It performs transformations such as string manipulations, UOM Conversions, Master data validations like Code normalization/Cleansing

  • It finally loads the transformed data into the target data structures, from which the report can be created.

  • Examples of ETL Tools are DataStage, Informatica and DTS


  7. ODS

  • Stands for Operational Data Store

  • Holds the most recent integrated data from multiple sources

  • Usually used by operational teams for day-to-day decision making

  • Data available at a higher level of granularity (business documents are stored)

  • Does not hold historical data – May hold 30-90 days history

  • It is used to store data which is volatile but required by business to make decisions


  8. Enterprise Data Warehouse

  • It is the core component  in the architecture

  • Data sourced from the source is ultimately stored here

  • The data models for the EDW are:

  • A normalized model, proposed by Bill Inmon

  • A dimensional model, proposed by Ralph Kimball, the father of dimensional data warehousing


  9. EDW – Dimensional Model

  • It originated in the mid-seventies with A.C. Nielsen and was made popular by Ralph Kimball

  • The Dimensional Model which is used in BMM layer can be divided into two categories

  • Star Schema – connected as a star, with a fact (e.g. sales) in the middle and the dimensions (e.g. product, time, region and customer) connected to the fact. Each dimension has a key, and the fact has a foreign key for that particular dimension.

  • Snowflake Schema – the product category is stored in different tables.


  10. OLTP Model

Highly normalized, typically in 3NF.





  11. Dimensional Model

It can be converted into the DW model as below.

Here many joins are eliminated, making queries simpler and faster.

  12. Dimensions

  • Sets context for asking questions about the facts in the fact table

  • They correspond to the entities by which you want to analyze the business metrics

  • Dimensions have multiple levels like time

  • The combination of levels participate in a hierarchy which is used for data aggregation

  • Multiple hierarchies can be carved out for a dimension such as geographical hierarchy, political hierarchy, sales hierarchy etc

  • The records in a dimension table are few compared to the records in fact tables

  13. Facts

  • Contains measures related to a process or event

  • Types of facts

  • Additive : Measures can be added along any and all dimensions for e.g. sales

  • Semi-Additive : Measures can be added along some dimensions and not all e.g. closing stock, bank balance

  • Non-Additive : Cannot be added along any dimension. E.g. Text measures, temperatures etc.

  • It contains a vast number of records compared to the dimension table

  • Records are mostly appended

  • Can contain either detail or summarized data




  14. OBIEE

  • Oracle's reporting tool, formerly known as Siebel Analytics

  • It is a reporting tool like SAP's BO and IBM's Cognos

  • A BI reporting tool that helps dump the data into spreadsheets is integrated with OBIEE

  • It is one of the most popular reporting tools in the market as per Gartner's quadrant

  • It is used in custom standalone deployments and also in Oracle's very own OBIA product

  • It caters to multiple levels of an organization, from C-level executives to store managers

  • OBIEE 11g comes with a rich and interactive UI and complex calculation features when compared to 10g

  15. Changes made in OBIEE

  • WebLogic, containing the Oracle MapViewer application, is utilized instead of OC4J

  • System management is done using Enterprise Manager, for example for deploying the repository

  • It is embedded with a built-in WebLogic server that consumes a lot of memory compared to 10g, for which 600 MB was enough

  • It employs a different file system structure, including logs etc.

  • The deployment process is modified

  • There are no "Oracle BI" Windows services as used in 10g

  • Ability for remote start/stop/restart via OPMN

  • The security model comprising authentication/authorization is completely changed

  • The BI Publisher repository is merged into the BI presentation catalog itself so that unnecessary manual calculations can be reduced

  • The "Credential Store" is now centrally managed by WebLogic/Fusion Middleware

  16. Changes not made in OBIEE

  • The BI Server repository (RPD) is still a single file at any point in time

  • The BI presentation catalog that stores reports is still a file system

  • The 5 BI server components (Server / Presentation Services / Java Host / Scheduler / Cluster Controller) remain unchanged

  • J2EE deployments such as analytics/bioffice/bipublisher remain unchanged

  • The Admin tool and Catalog Manager tools are present on the Windows platform only

  • There is only one instance of OBIEE per server (this may change in a future release)

  • Log file naming conventions for OBIEE processes like nqsserver, sawserver etc. remain unmodified

  17. Repository Creation Utility

The RCU is a common utility used for creating repository schemas for the entire Oracle Fusion Middleware platform

This utility is now used to create all the supporting database schemas required by Oracle Fusion Middleware

Oracle BI EE has a single schema that is prompted for during install; this schema will then include all the system tables required for BI Publisher, Balanced Scorecard and Usage Tracking.


  18. File System Structure

This has completely changed in OBIEE 11g.



  19. File System





  20. OBIEE 10g Architecture

The 10g UI is rather dull. It was built using C and C++ and contains a smaller number of Java components. The RPD is the repository file where we do all the development.



The Admin tool is used by the developer to create the RPD. Once the RPD is built, it is deployed to the Oracle BI server. The Oracle BI server interacts with multiple data sources. Once the RPD is deployed, requests go through the presentation services, in which the data is displayed in the required format, such as charts and graphs, for the end user.






  21. Oracle BI EE Architecture 11g



The supporting database schemas are created by the RCU. The BI system components are similar to 10g. The Oracle BI domain has various sub-parts; the first sub-part, at a high level, is the WebLogic domain, which arrived with the advent of the WebLogic server in Oracle BI. Installing OBIEE installs WebLogic itself. The WebLogic server can be of two types, the admin server and the managed server; the admin server hosts the WebLogic console UI, and the WebLogic console is mainly used for security.

  22. WebLogic Console

The WebLogic console setup uses two ports:

7001 – Admin Server

9704 – BI Server









  23. Enterprise Manager



  24. Sizing

  • In terms of CPU, Oracle BI EE 11g should support the same number of users as 10g

  • As a general rule, plan for 250 users per CPU core, with a minimum of 2 CPU cores

  • However, the introduction of WebLogic and Enterprise Manager consumes a lot more memory than the previous OC4J

  • Each WebLogic Admin/Managed server consumes more than 600 MB of RAM

  • Machines running OBIEE 11g should ideally have a minimum of 6 GB of RAM


  25. OBIEE 11g New Features

  • Rich UI compared to 10g

  • The security model changed after WebLogic came into the picture, and is now more aligned with FMW

  • OBIEE 10g has parent-child hierarchies, whereas 11g also supports other hierarchies like ragged and skip-level

  • BI Publisher is fully integrated, providing a single window for reports

  • Reporting across multiple subject areas (e.g. Oracle General Ledger) is supported

  • Rich KPIs and scorecarding

  • Time series calculations

  • Rename wizard

  • LDAP authorization, where we can create our own set of users


OBIEE - Oracle Business Intelligence Enterprise Edition(Part 2)


This is the continuation of OBIEE - Oracle Business Intelligence Enterprise Edition (Part 1).

  1. Oracle BI Repository

The Oracle BI server stores metadata in repositories, i.e. in the physical layer.

The Oracle BI Administration tool has a GUI that allows server administrators to set up these repositories.

The Admin tool is used to:

Import metadata from databases and other data sources like XML

Simplify and reorganize the imported metadata into business models

In the presentation layer, the business model is structured so as to meet the expectations of business users who request BI information via the Oracle BI UI tools.

Open the admin tool in order to open the rpd file.

The rpd file has three layered architecture namely

  • Physical layer

  • BMM layer

  • Presentation layer

 

  2. Physical layer



The physical layer contains the physical objects; here we create the data source and import the metadata.

  3. Business Model and Mapping layer



The entire data modelling is done in the BMM layer. It takes the request from the user and passes it to the BI server engine, which creates a physical SQL query, runs it against the database, and returns the results to the user.

  4. Presentation layer



Presentation layer defines UI

The modelling is done in the presentation layer as per the end user requirements.

  5. Repository directory

By default, the repositories are stored in the repository subdirectory where the Oracle BI software is installed: ORACLE_INSTANCE\bifoundation\OracleBIServerComponent\bi_instance_name_obisn\repository

  6. Repository modes

  • The repository files can be opened for editing in offline mode or online mode

  • In offline mode, any RPD on the local machine can be opened, since the repository is not loaded into Oracle BI server memory

  • In online mode, only the repository deployed on the server can be opened

  • Administrators can perform tasks that are not available in offline mode

Manage scheduled jobs

Manage user sessions

Manage the query cache

Manage clustered servers

  • In Enterprise Manager, the default rpd deployed into the server is displayed.

 

  7. Publish Repository

Go to Enterprise Manager

Go to Business Intelligence, then lock and edit



  8. Restart OBI Services

After publishing the repository, the OBI services should be restarted as shown below.



  9. Reload Server Metadata

After new changes are made in the repository, the server metadata can be reloaded instead of signing out and in again.

  10. Query Processing

There are two types of queries, namely the logical query and the physical query. The logical query involves the logical dimensional tables.

The BI server converts the logical query into a physical query that runs against the underlying database.

  1. A user views a dashboard and submits an analysis

  2. Oracle BI presentation services makes a request to Oracle BI server to retrieve the requested data

  3. Using the repository file, Oracle BI server optimizes functions to request the data from the data sources

  4. Oracle BI server receives the data from the data sources and processes as necessary

  5. Oracle BI server passes the data to Oracle BI presentation services

  6. Oracle BI presentation services formats the data and sends it to the query client




  11. Building Physical Layer

 

  • Contains objects representing the physical data sources to which Oracle BI Server submits queries

  • May contain multiple data sources

  • It is the first layer built in a repository



 

  12. Database Objects

The database object is the highest-level object in the physical layer.

It defines the data source to which the Oracle BI server submits queries.

The ORCL name is inherited from the tnsnames.ora file.



  13. Database Properties



  14. Connection pool

The connection pool defines the connection between the Oracle BI server and the data source.

The database objects and connection pool are automatically created when you import the physical schema.

The Oracle Call Interface (OCI) is recommended for connecting to an Oracle data source.

The connection pool holds the data source name (DSN) information used to connect to the data source, the number of connections allowed, timeout information, and other connectivity-related administrative details.

Multiple connection pools can be created to improve performance for groups of users.

Selecting the Enable connection pooling checkbox allows multiple concurrent query requests to share a single database connection.



  15. Physical Table

A physical table is an object in the physical layer of the Administration tool that corresponds to a table in the physical database.

Physical tables are imported from a database or another data source, and they provide the metadata necessary for the Oracle BI server to access the tables with SQL requests.

When data source definitions are imported, the actual data is not moved; it remains in the physical data source.



  16. Physical table properties



  17. Physical column



  18. Key column

Key columns define relationships between tables by using primary and foreign keys.

A primary key is identified by a key column:

  • It uniquely identifies a single row of data

  • It consists of a column or set of columns



 

  19. Physical table alias

 

  • An alias table is a virtual physical table object that points to a physical table object. An alias table in the physical layer is like any other alias used in standard SQL notation

  • A common use of aliases is role-playing dimensions, where a single dimension simultaneously appears several times in the same fact table (for example, the same date dimension serving as both order date and ship date)

  • Alias synchronization is the act of ensuring that the source table and related alias tables have the same column definitions

  • An alias table always inherits all the column definitions from the source table, and synchronization happens automatically

Changes made in the source table are reflected in the alias table as well; key columns, however, must be defined manually.

Joins



 

  20. Creation of RPD



Import meta data as shown below



Select the metadata as depicted in the fig below



Then select the metadata objects as shown below





Then Verify import in physical layer as shown below



After that verify the connectivity as below



Create alias table as shown in the fig below



As soon as the alias table is created, the joins and keys should be defined as follows



The joins are made as follows



Joins can be made from fact to dimensions (a one-to-many relationship) and are made in the BMM layer or the reporting layer itself.

XML, Web Services and API's in Java

Objective:

In the previous article, How to validate XML against XSD in java, we learned how to validate an XML file against an XSD. In this article we will learn about the role of XML in the Java platform, web services concepts, web services standards, and the APIs and tools used to develop Java web services.


XML:

XML is a simple text based language which was designed to store and transport data in plain text format. It stands for Extensible Markup Language. Following are some of the salient features of XML.

XML is a markup language.

XML is a tag-based language like HTML.

XML tags are not predefined, unlike HTML.

You can define your own tags, which is why it is called an extensible language.

XML tags are designed to be self-descriptive.

XML is a W3C recommendation for data storage and transport.


Example:

<?xml version="1.0"?>
<Class>
  <Name>First</Name>
  <Sections>
     <Section>
        <Name>A</Name>
        <Students>
           <Student>Rohan</Student>
           <Student>Mohan</Student>
           <Student>Sohan</Student>
           <Student>Lalit</Student>
           <Student>Vinay</Student>
        </Students>
     </Section>
     <Section>
        <Name>B</Name>
        <Students>
           <Student>Robert</Student>
           <Student>Julie</Student>
           <Student>Kalie</Student>
           <Student>Michael</Student>
        </Students>
     </Section>
  </Sections>
</Class>


Role of XML in Java Platform:

The features of XML and Java are:

  1. Platform Independent

  2. Security

  3. Scalability

  4. Reliability


The advantages of developing Web application using XML are:

  1. Supports exchange of data between heterogeneous databases and systems.

  2. Distributes data processing load to the web browser.

  3. Integrates Java servers with Web browsers.

  4. XML is the natural choice of developing enterprise level web applications using java because of its data portability and platform independence features.

  5. Developers can implement the platform independent features of java to develop applications and exchange application data using XML.


Web services concept:

This section covers both basic and advanced concepts of web services, such as protocols, SOAP, RESTful services, Java web service implementation, and JAX-WS and JAX-RS, with examples.

A web service can be defined in the following ways:

  • a client-server application or application component for communication;

  • a method of communication between two devices over a network;

  • a software system for interoperable machine-to-machine communication;

  • a collection of standards or protocols for exchanging information between two devices or applications.

There are three major web service components:


  1. SOAP: SOAP is an acronym for Simple Object Access Protocol.
SOAP is an XML-based protocol for accessing web services.

SOAP is a W3C recommendation for communication between applications.

SOAP is XML based, so it is platform independent and language independent. In other words, it can be used with Java, .NET or PHP on any platform. (A minimal Java sketch appears after this list.)

    2. WSDL: WSDL is an acronym for Web Services Description Language.

WSDL is an XML document containing information about a web service, such as the method names, method parameters and how to access it.

WSDL is a part of UDDI. It acts as an interface between web service applications.

WSDL is pronounced as wiz-dull.

    3. UDDI: UDDI is an acronym for Universal Description, Discovery and Integration.

UDDI is an XML-based framework for describing, discovering and integrating web services.

UDDI is a directory of web service interfaces described by WSDL, containing information about web services.

UDDI is an XML-based standard for describing, publishing, and finding web services.

  • UDDI stands for Universal Description, Discovery, and Integration.

  • UDDI is a specification for a distributed registry of web services.

  • UDDI is a platform-independent, open framework.

  • UDDI can communicate via SOAP, CORBA, Java RMI Protocol.

  • UDDI uses the Web Service Definition Language (WSDL) to describe interfaces to web services.

  • UDDI is seen with SOAP and WSDL as one of the three foundation standards of web services.

  • UDDI is an open industry initiative, enabling businesses to discover each other and define how they interact over the Internet.
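To make the SOAP component described above concrete, here is a minimal JAX-WS sketch of a SOAP endpoint (the service name, URL and greeting are illustrative assumptions; JAX-WS ships with Java SE 6-8, and the runtime generates the WSDL automatically):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical SOAP service; the class name and URL are illustrative.
@WebService
public class HelloService {

    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the endpoint; the generated WSDL is then available at
        // http://localhost:8080/hello?wsdl
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
    }
}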

Partner Interface Processes (PIPs) are XML-based interfaces that enable two trading partners to exchange data. Dozens of PIPs already exist. Some of them are listed here:

  • PIP2A2 : Enables a partner to query another for product information.

  • PIP3A2 : Enables a partner to query the price and availability of specific products.

  • PIP3A4 : Enables a partner to submit an electronic purchase order and receive acknowledgment of the order.

  • PIP3A3 : Enables a partner to transfer the contents of an electronic shopping cart.

  • PIP3B4 : Enables a partner to query the status of a specific shipment.

RESTful web services:

RESTful web services are built to work best on the Web. Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, that, if applied to a web service, induce desirable properties, such as performance, scalability, and modifiability, that enable services to work best on the Web. In the REST architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web. The resources are acted upon by using a set of simple, well-defined operations. The REST architectural style constrains an architecture to a client/server architecture and is designed to use a stateless communication protocol, typically HTTP. In the REST architectural style, clients and servers exchange representations of resources by using a standardized interface and protocol.

The following principles encourage RESTful applications to be simple, lightweight, and fast:

  • Resource identification through URI: A RESTful web service exposes a set of resources that identify the targets of the interaction with its clients. Resources are identified by URIs, which provide a global addressing space for resource and service discovery. See The @Path Annotation and URI Path Templates for more information.

  • Uniform interface: Resources are manipulated using a fixed set of four create, read, update, delete operations: PUT, GET, POST, and DELETE. PUT creates a new resource, which can then be deleted by using DELETE. GET retrieves the current state of a resource in some representation. POST transfers a new state onto a resource. See Responding to HTTP Methods and Requests for more information. (A minimal sketch of such a resource appears after this list.)

  • Self-descriptive messages: Resources are decoupled from their representation so that their content can be accessed in a variety of formats, such as HTML, XML, plain text, PDF, JPEG, JSON, and others. Metadata about the resource is available and used, for example, to control caching, detect transmission errors, negotiate the appropriate representation format, and perform authentication or access control. See Responding to HTTP Methods and Requests and Using Entity Providers to Map HTTP Response and Request Entity Bodies for more information.

  • Stateful interactions through hyperlinks: Every interaction with a resource is stateless; that is, request messages are self-contained. Stateful interactions are based on the concept of explicit state transfer. Several techniques exist to exchange state, such as URI rewriting, cookies, and hidden form fields. State can be embedded in response messages to point to valid future states of the interaction. See Using Entity Providers to Map HTTP Response and Request Entity Bodies and “Building URIs” in the JAX-RS Overview document for more information.
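To make the uniform interface principle concrete, here is a minimal JAX-RS resource sketch (the path, class name and XML representation are illustrative assumptions; a JAX-RS runtime such as Jersey is needed to deploy it):

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource identified by the URI /employees/{id}.
@Path("/employees/{id}")
public class EmployeeResource {

    // GET retrieves the current state of the resource in some representation.
    @GET
    @Produces(MediaType.APPLICATION_XML)
    public String read(@PathParam("id") int id) {
        return "<employee><id>" + id + "</id></employee>";
    }

    // PUT creates (or replaces) the resource at this URI.
    @PUT
    public void createOrReplace(@PathParam("id") int id, String representation) {
        // A real service would store the representation under this URI.
    }

    // DELETE removes the resource previously created with PUT.
    @DELETE
    public void delete(@PathParam("id") int id) {
        // A real service would remove the stored representation.
    }
}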

 

API's:

JAXP:

The Java API for XML Processing (JAXP) is for processing XML data using applications written in the Java programming language.

JAXP leverages the parser standards Simple API for XML Parsing (SAX) and Document Object Model (DOM) so that you can choose to parse your data as a stream of events or to build an object representation of it.

JAXP also supports the Extensible Stylesheet Language Transformations (XSLT) standard, giving you control over the presentation of the data and enabling you to convert the data to other XML documents or to other formats, such as HTML.

JAXP also provides namespace support, allowing you to work with DTDs that might otherwise have naming conflicts. Finally, as of version 1.4, JAXP implements the Streaming API for XML (StAX) standard.

Designed to be flexible, JAXP allows you to use any XML-compliant parser from within your application. It does this with what is called a pluggability layer, which lets you plug in an implementation of the SAX or DOM API.

The pluggability layer also allows you to plug in an XSL processor, letting you control how your XML data is displayed.
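
As a brief illustration of the pluggability layer, the following sketch obtains whichever DOM parser implementation is configured (via the javax.xml.parsers.DocumentBuilderFactory system property) without ever referencing a parser class directly; the file name is illustrative:

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class PluggabilityDemo {
    public static void main(String[] args) throws Exception {
        // The factory looks up the configured parser implementation;
        // application code stays independent of the concrete parser.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new File("employee.xml"));
        System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
    }
}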

SAX:

This lesson focuses on the Simple API for XML (SAX), an event-driven, serial-access mechanism for accessing XML documents.

This protocol is frequently used by servlets and network-oriented programs that need to transmit and receive XML documents, because it is the fastest and least memory-intensive mechanism that is currently available for dealing with XML documents, other than the Streaming API for XML (StAX).

Setting up a program to use SAX requires a bit more work than setting up to use the Document Object Model (DOM). SAX is an event-driven model (you provide the callback methods, and the parser invokes them as it reads the XML data), and that makes it harder to visualize. Finally, you cannot "back up" to an earlier part of the document, or rearrange it, any more than you can back up a serial data stream or rearrange characters you have read from that stream.

For those reasons, developers who are writing a user-oriented application that displays an XML document and possibly modifies it will want to use the DOM mechanism described in Document Object Model.

However, even if you plan to build DOM applications exclusively, there are several important reasons for familiarizing yourself with the SAX model:

  • Same Error Handling: The same kinds of exceptions are generated by the SAX and DOM APIs, so the error handling code is virtually identical.

  • Handling Validation Errors: By default, the specifications require that validation errors be ignored. If you want to throw an exception in the event of a validation error (and you probably do), then you need to understand how SAX error handling works. A sketch of such an error handler appears after this list.

  • Converting Existing Data: As you will see in Document Object Model, there is a mechanism you can use to convert an existing data set to XML. However, taking advantage of that mechanism requires an understanding of the SAX model.
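
As noted in the Handling Validation Errors point above, turning validation errors into exceptions means registering your own ErrorHandler. A minimal sketch (the class name is illustrative) could look like this:

import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

// Hypothetical handler that reports warnings but treats both
// recoverable errors and fatal errors as exceptions.
public class StrictErrorHandler implements ErrorHandler {
    public void warning(SAXParseException e) throws SAXException {
        System.err.println("Warning: " + e.getMessage());
    }
    public void error(SAXParseException e) throws SAXException {
        throw e; // validation errors arrive here; rethrow instead of ignoring
    }
    public void fatalError(SAXParseException e) throws SAXException {
        throw e;
    }
}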




Parsing an XML file using SAX


 

Objective:
In the previous article XML, WEB SERVICES AND API'S IN JAVA we learned about the role of XML in the Java platform, an introduction to the web services concept, web services standards, and the APIs and tools used to develop Java services. In this article we will learn about parsing an XML file using SAX.

Parsing an XML file using SAX:
In real-life applications, you will want to use the SAX parser to process XML data and do something useful with it. This section examines an example JAXP program, SAXLocalNameCount, that counts the number of elements in an XML document using only the localName component of each element. Namespace names are ignored for simplicity. This example also shows how to use a SAX ErrorHandler.

Creating the Skeleton:
The SAXLocalNameCount program is created in a file named SAXLocalNameCount.java.

public class SAXLocalNameCount {
   static public void main(String[] args) {
       // ...
   }
}

Because you will run it standalone, you need a main() method. And you need command-line arguments so that you can tell the application which file to process.

Importing Classes:
The import statements for the classes the application will use are the following.

package sax;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.*;

import java.util.*;
import java.io.*;

public class SAXLocalNameCount {
   // ...
}

The javax.xml.parsers package contains the SAXParserFactory class that creates the parser instance used. It throws a ParserConfigurationException if it cannot produce a parser that matches the specified configuration of options. (Later, you will see more about the configuration options.) The javax.xml.parsers package also contains the SAXParser class, which is what the factory returns for parsing. The org.xml.sax package defines all the interfaces used for the SAX parser. The org.xml.sax.helpers package contains DefaultHandler, which defines the class that will handle the SAX events that the parser generates. The classes in java.util and java.io are needed to provide hash tables and output.

Setting Up I/O:
The first order of business is to process the command-line arguments, which at this stage only serve to get the name of the file to process. The following code in the main method tells the application what file you want the SAXLocalNameCount application to process.

static public void main(String[] args) throws Exception {
   String filename = null;

   // Take the last argument as the file name; any extra
   // arguments cause the usage message to be printed.
   for (int i = 0; i < args.length; i++) {
       filename = args[i];
       if (i != args.length - 1) {
           usage();
       }
   }

   if (filename == null) {
       usage();
   }
}

This code sets the main method to throw an Exception when it encounters problems, and defines the command-line options which are required to tell the application the name of the XML file to be processed. Other command line arguments in this part of the code will be examined later in this lesson, when we start looking at validation.

The filename String that you give when you run the application will be converted to a java.io.File URL by an internal method, convertToFileURL(). This is done by the following code in SAXLocalNameCount.

public class SAXLocalNameCount {
   private static String convertToFileURL(String filename) {
       String path = new File(filename).getAbsolutePath();
       if (File.separatorChar != '/') {
           path = path.replace(File.separatorChar, '/');
       }

       if (!path.startsWith("/")) {
           path = "/" + path;
       }
       return "file:" + path;
   }

   // ...
}

If incorrect command-line arguments are specified when the program is run, the SAXLocalNameCount application's usage() method is invoked to print the correct options onscreen.

private static void usage() {
   System.err.println("Usage: SAXLocalNameCount <file.xml>");
   System.err.println("       -usage or -help = this message");
   System.exit(1);
}

Further usage() options will be examined later in this lesson, when validation is addressed.

Implementing the ContentHandler Interface

The most important interface in SAXLocalNameCount is ContentHandler. This interface requires a number of methods that the SAX parser invokes in response to various parsing events. The major event-handling methods are: startDocument, endDocument, startElement, and endElement.

The easiest way to implement this interface is to extend the DefaultHandler class, defined in the org.xml.sax.helpers package. That class provides do-nothing methods for all the ContentHandler events. The example program extends that class.

public class SAXLocalNameCount extends DefaultHandler {
   // ...
}

Handling Special Characters:

In XML, an entity is an XML structure (or plain text) that has a name. Referencing the entity by name causes it to be inserted into the document in place of the entity reference. To create an entity reference, you surround the entity name with an ampersand and a semicolon:
&entityName;
When you are handling large blocks of XML or HTML that include many special characters, you can use a CDATA section. A CDATA section works like <code>...</code> in HTML, only more so: all white space in a CDATA section is significant, and characters in it are not interpreted as XML. A CDATA section starts with <![CDATA[ and ends with ]]>.

An example of a CDATA section, taken from the sample XML file install-dir/jaxp-1_4_2-release-date/samples/data/REC-xml-19980210.xml, is shown below.

<p><termdef id="dt-cdsection" term="CDATA Section"><term>CDATA sections</term> may occur anywhere character data may occur; they are used to escape blocks of text containing characters which would otherwise be recognized as markup. CDATA sections begin with the string "<code>&lt;![CDATA[</code>" and end with the string "<code>]]&gt;</code>"

Once parsed, this text would be displayed as follows:

CDATA sections may occur anywhere character data may occur; they are used to escape blocks of text containing characters which would otherwise be recognized as markup. CDATA sections begin with the string "<![CDATA[" and end with the string "]]>".

The existence of CDATA makes the proper echoing of XML a bit tricky. If the text to be output is not in a CDATA section, then any angle brackets, ampersands, and other special characters in the text should be replaced with the appropriate entity reference. (Replacing left angle brackets and ampersands is most important; other characters will be interpreted properly without misleading the parser.) But if the output text is in a CDATA section, then the substitutions should not occur, resulting in text like that in the earlier example. In a simple program such as our SAXLocalNameCount application, this is not particularly serious. But many XML-filtering applications will want to keep track of whether the text appears in a CDATA section, so that they can treat special characters properly.
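
For instance, a filtering application echoing text that is outside any CDATA section could substitute entity references with a small helper along these lines (a minimal sketch; the class and method names are illustrative):

public class XmlEscaper {

    // Replaces the characters most likely to mislead a parser with
    // entity references; the ampersand must be replaced first so that
    // the other substitutions are not themselves re-escaped.
    static String escape(String text) {
        return text.replace("&", "&amp;")
                   .replace("<", "&lt;")
                   .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        // Prints: a &lt; b &amp;&amp; c
        System.out.println(escape("a < b && c"));
    }
}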

Setting up the Parser:

The following code sets up the parser and gets it started:

static public void main(String[] args) throws Exception {

   // Code to parse command-line arguments
   //(shown above)
   // ...

   SAXParserFactory spf = SAXParserFactory.newInstance();
   spf.setNamespaceAware(true);
   SAXParser saxParser = spf.newSAXParser();
}

These lines of code create a SAXParserFactory instance, as determined by the setting of the javax.xml.parsers.SAXParserFactory system property. The factory to be created is set up to support XML namespaces by setting setNamespaceAware to true, and then a SAXParser instance is obtained from the factory by invoking its newSAXParser() method.
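
Putting the fragments together, a minimal sketch of the complete flow might look like the following; the tag-counting handler here is a simplified stand-in for the full SAXLocalNameCount program:

import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SAXLocalNameCountSketch extends DefaultHandler {
    private final Map<String, Integer> tags = new HashMap<String, Integer>();

    // Invoked by the parser for every start tag; count by localName only.
    public void startElement(String uri, String localName,
                             String qName, Attributes atts) {
        Integer count = tags.get(localName);
        tags.put(localName, count == null ? 1 : count + 1);
    }

    // Invoked once at the end of the document; print the totals.
    public void endDocument() {
        for (Map.Entry<String, Integer> e : tags.entrySet()) {
            System.out.println(e.getKey() + " occurs " + e.getValue() + " times");
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParserFactory spf = SAXParserFactory.newInstance();
        spf.setNamespaceAware(true);
        SAXParser saxParser = spf.newSAXParser();
        saxParser.parse(convertToFileURL(args[0]), new SAXLocalNameCountSketch());
    }

    // Same conversion as shown earlier in this lesson.
    private static String convertToFileURL(String filename) {
        String path = new java.io.File(filename).getAbsolutePath();
        if (java.io.File.separatorChar != '/') {
            path = path.replace(java.io.File.separatorChar, '/');
        }
        return path.startsWith("/") ? "file:" + path : "file:/" + path;
    }
}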



Creating Fusion BIP Reports From Static Data


Business Requirement

While we have all become accustomed to (and perhaps bored with) always creating Data Models using a SQL query, we may have missed the other options shown in this screenshot:

STEPS
Sample data is saved on the local desktop and uploaded. Once that is done, the screen will appear as follows:




Final Output


Migration

This report may be migrated into your specific environment. You would need the three things below:

  1. SampleCSVData

  2. StaticDataBIP (Data Model)

  3. StaticDataBIPReport

Coexistence Mapping


 

  

Mapping Oracle E-Business Suite HRMS Data for HCM Coexistence:

You use Web Applications Desktop Integrator (Web ADI) spreadsheets to map Oracle E-Business Suite Human Resources Management Systems (HRMS) objects to Oracle Fusion objects. Only when this mapping is complete can you import E-Business Suite HRMS data into Oracle Fusion. This topic provides a summary of the Web ADI spreadsheet review and completion tasks, for which you use the E-Business Suite HRMS responsibility HCM Coexistence Manager. You can find complete instructions in the documents Integrating Oracle HRMS 12.1 with Oracle Fusion Talent Management and Workforce Compensation and Integrating Oracle HRMS Release 11i with Oracle Fusion Talent Management and Workforce Compensation.

Identifying Business Groups for Integration

You perform the task Identify Business Groups for Integration. In the Web ADI spreadsheet invoked by this task, you indicate, for each business group in the E-Business Suite HRMS instance, whether its data is to be integrated with Oracle Fusion.
Once this task is complete, you can import cross-reference data from Oracle Fusion and complete the mapping task.

Mapping E-Business Suite HRMS Data to Oracle Fusion

You perform the Define Mapping task (Release 12.1) or the Setup: Mapping function (Release 11i).
The purpose of the mapping task is to ensure that:

  • Every E-Business Suite HRMS data value has a corresponding Oracle Fusion data value.
  • Every Oracle Fusion data value is accounted for in the mapping.

The mapping activity comprises several mapping subtasks, for each of which you complete a Web ADI spreadsheet. The following table summarizes the mapping subtasks.

Mapping Subtask — Description

  • Map business group defaults: For a specified business group, you select the name of a default reference data set and a default legal employer. You also enter the name of the business unit to be created for the business group, and optionally select an eligibility profile to filter person data. You perform this task for every business group identified for integration.

  • Map HR organizations to legal employers: For each HR organization in a specified business group, you can select the name of the legal employer in Oracle Fusion to which the HR organization maps. You perform this subtask only if you want to map HR organizations to legal employers.

  • Map key flexfield segments for job, position, grade, and competency flexfields: In E-Business Suite HRMS, key flexfields are used to hold job, position, grade, and competency definitions. In Oracle Fusion, the code and name attributes for job, position, grade, and competency are stored in the base tables rather than in key flexfields. Therefore, the data captured in segments for these entities in E-Business Suite HRMS must be mapped to the code and name values in Oracle Fusion.

  • Map user defined lookups: For each lookup type in E-Business Suite HRMS, the Oracle Fusion equivalent lookup type is preselected. For each lookup code, you enter the Oracle Fusion equivalent value.

  • Map EBS person types to Fusion person types: For each person type in a specified business group, you select an equivalent in Oracle Fusion.

  • Map actions and action reasons: For E-Business Suite HRMS entities such as employee leaving reasons and salary components, you select equivalent actions and action reasons in Oracle Fusion.

  • Map qualification type category to content type (for Oracle Fusion Talent Management): For each qualification type category in E-Business Suite HRMS, you select a content type in Oracle Fusion from the imported list of values.

  • Map elements (for Oracle Fusion Workforce Compensation): For each element and associated element input value in a specified business group, you select an equivalent in Oracle Fusion.


Once the mapping subtasks are complete, you perform the task Process and Validate to ensure that the mapping selections made for specific subtasks are valid.

Once the mapping is successfully validated, you can extract E-Business Suite HRMS data for delivery to Oracle Fusion. For this task, you use the E-Business Suite HRMS responsibility HCM Coexistence Integrator.      

Mapping PeopleSoft HRMS Data for HCM Coexistence:  

You use Domain Value Maps (DVMs) to map Oracle PeopleSoft HRMS objects to Oracle Fusion objects. Only when this mapping is complete can you import PeopleSoft HRMS data into Oracle Fusion. This topic provides a summary of the DVM review and completion tasks. You can find complete instructions in the PeopleSoft HRMS document Integrating PeopleSoft HRMS 8.9 with Fusion Talent and Workforce Compensation.

Understanding System Data Codes in DVMs

PeopleSoft HRMS supplies multiple DVMs. Each of the supplied DVMs has a system data code that identifies whether the DVM contains system data values (equivalent to predefined values in Oracle Fusion). The system data code also specifies how to complete the DVM definition such that:

  • Every PeopleSoft HRMS data value has a corresponding Oracle Fusion data value.
  • Every Oracle Fusion data value is accounted for in the mapping.
Unless you review and complete the DVMs, data cannot be imported successfully to Oracle Fusion.

Completing the DVM Definitions

For this activity you work in both PeopleSoft HRMS and Oracle Fusion. You also need the .dat files from the Generate Mapping File for HCM Business Objects process in Oracle Fusion.
The following table summarizes how to complete the DVM for each system data code.

System Data Code — Meaning

  • PF: The DVM contains both PeopleSoft HRMS system data values and their corresponding Oracle Fusion values. No action is required unless you need to modify any of the delivered values.

  • P: The DVM contains PeopleSoft HRMS system data values only. You must:
    1. Verify the delivered values.
    2. Review the equivalent Oracle Fusion values in Oracle Fusion.
    3. Add PeopleSoft HRMS values that have no existing Oracle Fusion equivalents to the Oracle Fusion lookup type.
    For example, for the DVM PERSON_HIGHEST_EDU_LEVEL_CODE, review the PeopleSoft HRMS values in the DVM and compare them with the values in the Oracle Fusion lookup type HIGHEST_EDUCATION_LEVEL. You add PeopleSoft HRMS values that have no existing Oracle Fusion equivalents to the Oracle Fusion HIGHEST_EDUCATION_LEVEL lookup. To complete the mapping you add all of the Oracle Fusion values to the DVM, including any that you added to the lookup type, using the PeopleSoft HRMS Populate Domain Value Maps page.

  • M: The DVM contains neither PeopleSoft HRMS nor Oracle Fusion system data values. This system data code identifies values such as action and action reason codes that exist in Oracle Fusion only. To populate this DVM, you need to import the Oracle Fusion values. You:
    1. Review and manually modify the file of mapping identifiers (the .dat file) that was created by running the Oracle Fusion Generate Mapping File for HCM Business Objects process.
    2. Save the modified .dat file as a .csv file, and import the values into the DVM using the PeopleSoft HRMS Import Value Maps page.

  • N: The DVM contains neither PeopleSoft HRMS nor Oracle Fusion system data values. The PeopleSoft HRMS values exist, but not as system data values. To populate the DVM, you:
    1. Retrieve the PeopleSoft HRMS values using the SQL script supplied in the DVM.
    2. Compare the PeopleSoft HRMS values to those in the corresponding Oracle Fusion lookup.
    3. Add PeopleSoft HRMS values that have no existing Oracle Fusion equivalents to the Oracle Fusion lookup.
    For example, for the DVM PERSON_ETHNICITY_CODE, you retrieve the PeopleSoft HRMS values using the supplied SQL script and compare them to the values in the Oracle Fusion lookup type PER_ETHNICITY. You add PeopleSoft HRMS values that have no existing Oracle Fusion equivalents to the PER_ETHNICITY lookup.

  • X: The DVM contains neither PeopleSoft HRMS nor Oracle Fusion data. Steps for completing this DVM are identical to those for completing DVMs with the system data code M, except that you must add specific values to these DVMs. To complete the mapping, you add both sets of values to the DVM using the PeopleSoft Populate Domain Value Maps page.

Synchronization of User and Role Information with Oracle Identity Management: How It Is Processed

Oracle Identity Management (OIM) maintains Lightweight Directory Access Protocol (LDAP) user accounts for users of Oracle Fusion Applications. OIM also stores the definitions of abstract, job, and data roles, and holds information about roles provisioned to users.
Most changes to user and role information are shared automatically and instantly by Oracle Fusion Human Capital Management (HCM) and OIM. In addition, two scheduled processes, Send Pending LDAP Requests and Retrieve Latest LDAP Changes, manage information exchange between Oracle Fusion HCM and OIM in some circumstances:

  • Send Pending LDAP Requests sends to OIM bulk requests and future-dated requests that are now active.
  • Retrieve Latest LDAP Changes requests from OIM changes that may not have arrived because of a failure or error, for example.




Settings That Affect Synchronization of User and Role Information

You are recommended to run the Send Pending LDAP Requests process at least daily to ensure that future-dated changes are identified and processed as soon as they take effect. Retrieve Latest LDAP Changes can also run daily, or less frequently if you prefer. For example, if you know that a failure has occurred between OIM and Oracle Fusion HCM, then you can run Retrieve Latest LDAP Changes to ensure that user and role information is synchronized.
When processing bulk requests, the batch size that you specify for the Send Pending LDAP Requests process is the number of requests to be processed in a single batch. For example, if you specify a batch size of 25, 16 batches of requests will be created and processed in parallel if there are 400 requests to be processed.

How Synchronization of User and Role Information Is Managed  

Synchronization of most user and role information between Oracle Fusion HCM and OIM occurs automatically. However, when you run Send Pending LDAP Requests to process future-dated or bulk requests, it sends to OIM:

  • Requests to create, suspend, and re-enable user accounts. When a person record is created in Oracle Fusion HCM, OIM creates a user account automatically. When all of a person's work relationships are terminated and the person has no roles, the person's user account is suspended automatically. If the person is subsequently rehired, the suspended account is automatically re-enabled.
  • Role provisioning and role deprovisioning changes for individual users.
  • Changes to relevant person attributes for individual users.
  • New and updated information about HCM data roles, which are created in Oracle Fusion HCM.

The process Retrieve Latest LDAP Changes sends to Oracle Fusion HCM:

  • Names of new user accounts. When a person record is created in Oracle Fusion HCM, OIM creates a user account automatically and returns:
    o The user account name and password. If the user's primary work e-mail address was entered when the person record was created, then the user account name and password are returned to the user; otherwise, this information is returned to the primary work e-mail address of the user's line manager. (No notification is sent if the user has no line manager or the line manager has no primary work e-mail address.)
    o The globally unique identifier (GUID) associated with the LDAP directory user account, which is added automatically to the person record.
  • Latest information about abstract, job, and data roles. OIM stores the latest information about all abstract, job, and data roles, including HCM data roles. Oracle Fusion HCM maintains a local copy of all role names and types so that lists of roles presented in role mappings and elsewhere are up to date. Note: new HCM data roles are available only when OIM has returned information about those roles to Oracle Fusion HCM.
  • Work e-mail addresses, if OIM owns the work e-mail address.
The values of the following person attributes are sent to OIM automatically whenever a person record is created and whenever any of these attributes is subsequently updated:

  • Person number
  • System person type from the person's primary assignment
  • Latest start date of the current period of service
  • The GUIDs of all of the person's managers
  • Work e-mail address, if Oracle Fusion HCM owns the work e-mail address
  • Work mobile phone number
  • Work phone number
  • Work FAX number
  • Both the display name and the following name components in all languages in which they have been created in the person record:
    o First name
    o Middle name
    o Last name
    o Name suffix
  • Both the formatted work-location address and the following components of the work-location address from the person's primary assignment:
    o Address line 1
    o City
    o State
    o Postal code
    o Country code
No personally identifiable information (PII) is sent from Oracle Fusion HCM to OIM. 

User and Role Synchronization: Explained

Oracle Identity Management (OIM) maintains Lightweight Directory Access Protocol (LDAP) user accounts for users of Oracle Fusion applications. OIM also stores the definitions of abstract, job, and data roles and holds information about roles provisioned to users. During implementation, any existing information about users, roles, and roles provisioned to users must be copied from the LDAP directory to the Oracle Fusion Applications tables. Once the Oracle Fusion Applications tables are initialized with this information, it is maintained automatically. To perform the initialization, you run the process Retrieve Latest LDAP Changes.
Note
For security and audit best practice, implementation users have person records and appropriate role-based security access. So that appropriate roles can be assigned to implementation users, you must run the process Retrieve Latest LDAP Changes before you create implementation users.
During initial implementation, the installation super user performs the task Run User and Role Synchronization Process to run the Retrieve Latest LDAP Changes process.
Tip
The user name and password of the installation super user are created during installation provisioning of Oracle Fusion Applications. For details of the user name and password, contact your system administrator or the person who installed Oracle Fusion Applications.

Creating Users and Provisioning Roles for HCM Coexistence: Explained

A user is created automatically for each worker record that you load into Oracle Fusion from your source application. User accounts are created and maintained in a Lightweight Directory Access Protocol (LDAP) directory local to Oracle Fusion by Oracle Identity Management (OIM). You must work with your service provider to configure items such as identity policy and password policy in OIM. Users have a user name and password that are specific to their use of Oracle Fusion applications. 

The process for creating users and provisioning roles to them varies according to whether you are performing an initial or incremental data load.

Creating Users and Provisioning Roles for an Initial Data Load

To create users and provision roles to them during the initial data load, you: 

1. Create the role provisioning rules required by your enterprise.
User access to functions and data is determined entirely by the roles that users have, and roles must be provisioned to users. To manage both automatic and manual provisioning of roles to users, you create role mappings. For example, you create role mappings to provision abstract roles, such as employee and line manager, automatically to all employees and line managers. If you create data roles for particular job roles, you must create role mappings to manage the provisioning of those roles to eligible users. A typical user has multiple roles. To create role mappings, you perform the Manage HCM Role Provisioning Rules task.
Note
If your initial data load includes large volumes of person and employment data, you are recommended to perform step 1 (this step) after step 3.
2. Load person and employment data. For the initial data load, you perform the Oracle Fusion Functional Setup Manager task Load HCM Data for Coexistence.
3. Run the Send Pending LDAP Requests process.
This process sends bulk requests to OIM to create, suspend, and re-enable user accounts, as appropriate.
4. Apply autoprovisioning, using the Manage Role Mappings task, to assign all roles with the Autoprovision option selected to eligible workers.
5. Manually assign roles, as appropriate.
Roles identified in your role provisioning rules as Requestable can be assigned to other workers by managers and human resource specialists who satisfy the role mapping conditions. Workers who satisfy the role mapping conditions can request for themselves roles identified in your role provisioning rules as Self-requestable.

Creating Users and Provisioning Roles for an Incremental Data Load

When you load person and employment data after the initial data load, the process for managing users and role provisioning is as follows:
1. Update role provisioning rules, if necessary.
The role mappings that you created for the initial data load may be sufficient; however, you are recommended to validate the existing mappings and make any changes before you perform an incremental data load.
2. Load person and employment data using the Load HCM Data task in the Data Exchange work area.
3. Run the Send Pending LDAP Requests process. You can schedule this process to run automatically. For example, you could schedule this process to run daily.  
4. Manually assign requestable roles, as appropriate.

R 12 - Article 4 - Multiple organisations


This is the continuation of Article-3 of Multiple organisations.

Sequence of steps:
Define the Structure:



In the above structure, ABC Corporation is the business group and has three ledgers: India, America and England. The first ledger, India, is assigned to the legal entity A Corporation and has the A1 operating unit. Multiple operating units could also be linked to a single ledger, as in the second case. In the case of C, manufacturing is involved, so there is an inventory organisation linked to it.
A structure is defined by having the business group on the top and then the ledger; depending on the requirement it could be three business groups or a single business group, and under each business group we have ledgers and corresponding legal entities. One legal entity could be assigned to multiple operating units. B1 and B2 are assigned to ledger America and legal entity B Corporation. This is how it could be linked across. In the third case there is an inventory organisation, compared with the first and second cases where there is none. If only General Ledger is used then there is no need to define an operating unit; a legal entity is sufficient. Operating units are used for subledger accounting. Ledger sets are multiple ledgers grouped together in one responsibility; it is a grouping of similar ledgers. Sometimes we can create one ledger set so that through a single responsibility we have access to multiple ledgers. It is not mandatory. It is an advanced feature in Release 12.
Scenario where ledger sets could be used: Ledger sets can be used when there are multiple ledgers and everything has to come under one group. It would be a consolidation scenario where four or five ledgers have the same chart of accounts structure. In such a situation they could be assigned to a ledger set and the ledger set could be assigned to a responsibility. The person having access to that responsibility will be able to see all the ledgers instead of a single ledger.

In the above structure, ledger India, ledger America and ledger England could be combined into a ledger set and assigned to a responsibility. Ledgers are separated based on the 4 Cs. A case for making a ledger set would be when the total revenue is to be valued. It could also have read/write access in terms of the data access set. A ledger set could be defined with the group companies and attached to the main ledger; through that ledger set we could read information across all the ledgers. Consolidation is the main criterion. Another way of creating sets would be consolidating continent-wise; all European countries could be clubbed into one set.

  1. Define a location:

For defining a location there should be access to a HRMS responsibility.





In the above example, the instance is for Vision Enterprises. The first step is to go into the HRMS responsibility, where the location is defined. Here we define the address.
For example, there is an Indian address, a US address and a UK address.
All the locations/addresses are defined under the Work Structures window. It is not a legal address but the address mapped to the business group, operating unit or inventory organisation.
To define an address, the navigation is: select the HRMS responsibility, then Work Structures > Location.

For the Indian address we choose the address style India from the lookups, to facilitate address entry in the Indian context.
The address fields are then populated and we fill in the fields appropriately. The screenshot below gives an idea of the fields to be filled if we choose the address style India.

It asks for the locality, town, city, state, PIN code and country.
It also has fields for the fax number and email. The same address could be attached to the operating unit, business group, etc.
Based on the country selected, the address style changes; that is, the information is displayed in a different way.
If it were for the USA, it would be state, county, city and ZIP code. So in Location the address style is based on the country.

The time zone could be selected, and then the shipping details: the ship-to site, bill-to site, receiving site, internal site, and office site are checked with reference to the location.
The options could be checked based on the scenario, as shown in the screenshot below.

Multiple organisations

R 12 - Article 5 Multiple organisations


Let us continue from Article 4: Multi Organisation

3. Define the business group:

Once the location is defined it has to be attached to the business group, ledger, etc. There is a one-to-one relationship between a location and a business group; a business group cannot have different locations, and one operating unit can have one address. If there are multiple business groups, they could share the same location or have different locations.



Business group is a part of HRMS, so we have to start by going to the HRMS responsibility to define a business group.

Navigate to Work Structures > Organisation > Description.


Click on new to create a new organisation as a business group.


It would give the particulars of the organisation.



Here the location is attached to the business group. ABC Corp business group is the name of the business group and it is attached to the location as shown in the screenshot above. The location that was defined earlier is attached to the organisation; whatever locations were previously defined will appear here for attachment. ABC Corp India is a location of the ABC Corp business group. It is a one-to-one relationship. In the above screenshot, the field in white, Organization Classification, lets the system know whether this is a business group or an operating unit. Here, Business Group is to be selected.





Once the box is checked after selecting the business group it would get enabled.



This would indicate the business group definition. The most important step is classification, because the classification identifies whether it is an operating unit, a business group, etc. This classification is very important for identification. The list of values in this option also includes operating units. In order to make the business group ready to work with, the box has to be enabled (checked).


There is some additional information to be entered at each business group level, under Business Group Info. In the list of values, Business Group Info has a *; the * character indicates that it is mandatory. Without giving this information we cannot go ahead.


As shown in the above two screenshots, we have to fill in the information. The short name is given; if employee number generation is automatic, the field has to indicate Automatic. All the values are defined for HRMS purposes, like the job flexfield, costing flexfield, position flexfield, etc. The legislation code is India and the currency is INR; these are specifically required for HRMS. These are the key flexfields, with predefined flexfield structures. The fields are not automatically populated; they have to be selected. If the seeded Setup Business Group is used and overwritten, then filling the fields is automatic: the short name has to be given and the other fields are populated automatically. If required, your own structures can also be defined; when a new one is defined, all the fields have to be entered manually. The information is mandatory and without it the record cannot be saved. The GO button is pressed after entering the information.


Then click on OK and save the record. It is shown in the below screen shots.



4. Define or use an Accounting Key Flex field structure:
After the business group is configured, the next step is to define the accounting key flexfield structure. Key flexfield structure means the components used to define the structure; what is referred to here is the chart of accounts structure.
A ledger is a combination of the 4 Cs: Chart of Accounts, Currency, Calendar and Sub-ledger accounting method (new in Release 12). All four have to be defined and attached to be used.
It is defined by going to the general ledger as shown below.


GL owns the chart of accounts, and ledger configuration is completely owned by the ledger responsibility. The first step is to go to the General Ledger responsibility.


Then we go to Key Flexfields > Key > Segments. Segment means how the data is to be shown, or what the structure is; it could be company, department, account. In some cases there is an intercompany segment, and in some cases a cost center segment. Depending on the business requirement, the chart of accounts is structured. Based on the requirement, an existing chart of accounts structure could be used in a different ledger or a new ledger.
(For a functional consultant it takes time and effort to move into an ideal Oracle application. It is important to ensure all client requirements and reporting requirements are met before the chart of accounts structure is defined. The basis on which the structure is designed has to be looked into.) Ideally some things are mandatory: there should be a company (company code), and assets and liabilities have to be looked at. Though these are mandatory, they are not sufficient to give the complete chart of accounts structure. In some cases, companies have eight to nine segments. Components like company and nature of account are segments in Oracle. Based on the requirement we could define a new chart of accounts structure or use an existing one in a different ledger or a new ledger. In most cases companies may have different ledgers but the chart of accounts remains the same, so that consolidation can be done with ease. It is important to know the basis of the chart of accounts structure: the number of segments, the size of each segment, and the number of characters (e.g. is it a 2-character company code?).


To define a structure, like Operations Accounting Flex, we go to Segments and define the individual fields, i.e. the fields that have to be populated for creating the chart of accounts structure. This is the chart of accounts structure name, for example Operations Accounting Flex, and this structure is attached to the ledger.
The first step is to define the structure and then the segments under it. Here the structure code is Operations Accounting Flex.
Then Segments is clicked, as shown in the screenshot, and the values are defined, like company, department, account, etc.


Conversion of Object to XML document and XML to Object in Java.


Objective:

In the previous article Parsing an XML file using SAX we learned about parsing an XML file using SAX. In this article we will learn how to convert an object into an XML document and an XML document into an object.


Converting Object to XML in Java:

With the help of the Marshaller interface, we can marshal (write) an object into an XML document.

In this example, we are going to convert an object into XML having primitives, strings and collection objects.

Let's see the steps to convert a Java object into an XML document.

 

  1. Create POJO or bind the schema and generate the classes

  2. Create the JAXBContext object

  3. Create the Marshaller objects

  4. Create the content tree by using set methods

  5. Call the marshal method

Example 1:

File: Question.java

import java.util.List;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Question {
    private int id;
    private String questionname;
    private List<Answer> answers;

    public Question() {}

    public Question(int id, String questionname, List<Answer> answers) {
        super();
        this.id = id;
        this.questionname = questionname;
        this.answers = answers;
    }

    @XmlAttribute
    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    @XmlElement
    public String getQuestionname() {
        return questionname;
    }

    public void setQuestionname(String questionname) {
        this.questionname = questionname;
    }

    @XmlElement
    public List<Answer> getAnswers() {
        return answers;
    }

    public void setAnswers(List<Answer> answers) {
        this.answers = answers;
    }
}


File: Answer.java

public class Answer {
    private int id;
    private String answername;
    private String postedby;

    public Answer() {}

    public Answer(int id, String answername, String postedby) {
        super();
        this.id = id;
        this.answername = answername;
        this.postedby = postedby;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getAnswername() {
        return answername;
    }

    public void setAnswername(String answername) {
        this.answername = answername;
    }

    public String getPostedby() {
        return postedby;
    }

    public void setPostedby(String postedby) {
        this.postedby = postedby;
    }
}

File: ObjectToXml.java

import java.io.FileOutputStream;
import java.util.ArrayList;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class ObjectToXml {
    public static void main(String[] args) throws Exception {
        JAXBContext contextObj = JAXBContext.newInstance(Question.class);
        Marshaller marshallerObj = contextObj.createMarshaller();
        marshallerObj.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);

        Answer ans1 = new Answer(101, "java is a programming language", "ravi");
        Answer ans2 = new Answer(102, "java is a platform", "john");

        ArrayList<Answer> list = new ArrayList<Answer>();
        list.add(ans1);
        list.add(ans2);

        Question que = new Question(1, "What is java?", list);
        marshallerObj.marshal(que, new FileOutputStream("question.xml"));
    }
}

//Output:

The generated XML file:

File: question.xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<question id="1">
    <answers>
        <answername>java is a programming language</answername>
        <id>101</id>
        <postedby>ravi</postedby>
    </answers>
    <answers>
        <answername>java is a platform</answername>
        <id>102</id>
        <postedby>john</postedby>
    </answers>
    <questionname>What is java?</questionname>
</question>

Converting XML into Object:

With the help of the Unmarshaller interface, we can unmarshal (read) an XML document into an object.

In this example, we are going to convert a simple XML document into a Java object.

Let's see the steps to convert an XML document into a Java object.

  1. Create POJO or bind the schema and generate the classes

  2. Create the JAXBContext object

  3. Create the Unmarshaller objects

  4. Call the unmarshal method

  5. Use getter methods of POJO to access the data

XML Document

File: question.xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<question id="1">
    <answers>
        <answername>java is a programming language</answername>
        <id>101</id>
        <postedby>ravi</postedby>
    </answers>
    <answers>
        <answername>java is a platform</answername>
        <id>102</id>
        <postedby>john</postedby>
    </answers>
    <questionname>What is java?</questionname>
</question>


POJO classes

File: Question.java and File: Answer.java are the same classes shown in the marshalling example above.

Unmarshaller class

File: XmlToObject.java

import java.io.File;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

public class XmlToObject {
    public static void main(String[] args) {
        try {
            File file = new File("question.xml");
            JAXBContext jaxbContext = JAXBContext.newInstance(Question.class);
            Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();

            Question que = (Question) jaxbUnmarshaller.unmarshal(file);
            System.out.println(que.getId() + " " + que.getQuestionname());
            System.out.println("Answers:");

            List<Answer> list = que.getAnswers();
            for (Answer ans : list)
                System.out.println(ans.getId() + " " + ans.getAnswername() + " " + ans.getPostedby());
        } catch (JAXBException e) {
            e.printStackTrace();
        }
    }
}

//Output:
1 What is java?
Answers:
101 java is a programming language ravi
102 java is a platform john








Oracle Fusion HCM Employment Models


Oracle Fusion HCM has a concept of an Employment Model. Let's try to understand what it is, why it is required and how it works. We will also see how to decide which model is best for your client by asking certain questions. The 3 entities core to the newly introduced Employment Model are shown below.






  1. Assignment – Inherited straight from EBiz 11i and R12, it is a collection of a worker's personal and work-related information.
  2. Work Relationship – The relationship between a legal employer and a worker, which can exist as any of these 3 types (Employee, Contingent Worker, Non-Worker).
  3. Employment Terms – Terms and conditions (contract details can be included) for a worker's assignment.


Two main types of Employment Model and its variations-

1. Two-Tier Employment Model -
This model consists of 2 Entities - Work Relationship and Assignment.

  • Single Assignment
  • Single Assignment with Contract
  • Multiple Assignments

2. Three-Tier Employment Model
This model consists of all three Entities - Work Relationship, Assignment and Employment Terms.
  • Single Employment Terms with Single Assignment
  • Single Employment Terms with Multiple Assignments
  • Multiple Employment Terms with Single Assignment
  • Multiple Employment Terms with Multiple Assignments

The Employment Model for the enterprise or legal employer offers flexibility for a change during and after implementation.
If no work relationship exists for the enterprise or legal employer, the switch from two-tier to three-tier and vice versa can be done.
The switch from one three-tier employment model to any other three-tier employment model can be done at any time.

The Employment Model is tied closely to the concept of the Single Global Person Record, wherein the employee is entered only once into the system.
This ensures the retrieval of correct data for reporting purposes in multinational enterprises.



 

An Introduction To HCM Data Loader - Next Generation Conversion Tool for Fusion Applications




HCM Data Loader, also known as HDL, is the next generation data loading tool used in Fusion Applications.
Used in most new implementations starting July 2015, this tool has tremendously advanced features compared to its predecessor FBL (File Based Loader).
In this article we will try to understand what HDL is, along with a brief overview of the key concepts associated with it.

So without much ado let’s begin….

  • Bulk loading of HCM data from any source
  • Data migration or incremental updates
  • Flexible, pipe-delimited file format (see the sample after this list)
  • Comprehensive bulk loading capabilities
  • Automated and user-managed loading
  • Stage table maintenance
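
As an illustration of the pipe-delimited file format, a heavily simplified, hypothetical Worker.dat file might look like the sketch below; the exact attribute list for each business object should be taken from the delivered templates rather than from this example:

METADATA|Worker|SourceSystemOwner|SourceSystemId|EffectiveStartDate|EffectiveEndDate|PersonNumber|StartDate|ActionCode
MERGE|Worker|LEGACY|PER_100|2015/01/01|4712/12/31|100|2015/01/01|HIRE

The METADATA line names the attributes being supplied for the business object, and each MERGE line supplies one record's values in the same order.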



As with all delivered HCM Extracts, it is recommended that you make a copy of the HCM Data Loader Data Set Summary extract and alter the output to your requirements.
Navigate to the Manage Extract Definitions task available from the Data Exchange work area.
Query the HCM Data Loader Data Set Summary extract.
Click the copy icon to copy the seeded extract, and supply a name for the copied extract.
Once your copy is successfully created you can retrieve it by searching for it by name. Click on the name in the Search Results to make your required changes.




Last but not the least: "… Required to Perform Conversion". That brings me to the end of the topic.
Thanks a lot for all your time. Have a nice day!

HCM Data Loader Overview


HCM Data Loader, also referred to as HDL, is the next generation tool from Oracle for loading legacy HCM data into Fusion Applications.
Starting with the Release 9 July monthly update (Monthly Update Bundle 9.7), Oracle strongly recommends that all new customers begin using HDL.
Customers currently provisioned on Release 9 will require a configuration change.
All environments provisioned in Release 10 will be defaulted to HDL.
Existing customers may continue using File Based Loader (FBL) but should begin evaluating HDL to plan a migration in the future, where applicable.
There are a few scenarios where HDL may not be recommended, and an exception may be considered, for both existing and new customers.


  1. An existing customer using File-Based Loader who purchases an additional test environment that is created on R10.

  2. The customer must log an SR to change the default setting of Full to Limited to match other environments.

  3. Customers with PeopleSoft integration.

  4. Customers with Taleo integration via Taleo Connect Client (TCC) and File-Based Loader.



  1. Is File-Based Loader used for migration only? If so, once migration is complete, then HCM Data Loader could be considered.

  2. Is File-Based Loader used for ongoing integration? If so, then there will need to be rework of processes and a cutover decision.

  3. How are File-Based Loader data files generated? Whatever method is used for generating the File-Based Loader data files will need to be reworked to generate the correct HCM Data Loader format.

  4. The complexity of the integration will need to be taken into account to determine who does the rework of the extract mechanism.

  5. Are you loading objects outside of File-Based Loader and HCM Spreadsheet Data Loader (via SR requested scripts)?  If this is causing delays and issues related to lack of automation, then HCM Data Loader should be considered.

  6. Are there users who load data using HCM Spreadsheet Data Loader? A move to HCM Data Loader in R10 would disable this functionality, so it would probably be worth waiting for spreadsheet support.

  7. HCM Data Loader migration should be treated as an implementation with a proper project plan. File-Based Loader GUID values can continue to be used with HCM Data Loader. A process can be run to convert the File-Based Loader GUID into a source key that HCM Data Loader can recognize.

  8. HR spreadsheet loaders in the Data Exchange work area will not be available to use in conjunction with HCM Data Loader.

  9. HCM Data Loader and File-Based Loader cannot be used at the same time for objects supported by both.

  10. Payroll batch loader is still required for some payroll object loads.

  11. Environment refresh will overwrite HCM Data Loader settings if the source environment uses File-Based Loader. You will have to follow the process again to enable HCM Data Loader and convert File-Based Loader GUIDs and source keys.

  12. Once HCM Data Loader is enabled in a test environment, no additional File-Based Loader load testing will be possible.

  1. Customers who have recently started implementing and have not yet gone live should consider switching to HCM Data Loader if their timelines can accommodate it.

  2. This will mitigate the need for conversion to HCM Data Loader later in the project lifecycle. Project plans should be reviewed to incorporate the migration to HCM Data Loader, taking into account:

  3. Training on the new tool

  4. Rework of the extract mechanism to get data in the HCM Data Loader format

  5. The need to test the migration and integration processes using HCM Data Loader instead of File Based Loader

  6. The need to fit in with major implementation milestones


  1. Existing live customers already using File-Based Loader and HCM Spreadsheet Data Loader should defer the switch to HCM Data Loader.

  2. Customers who are not yet live should evaluate whether to rework their implementation to use HCM Data Loader or continue using File-Based Loader and HCM Spreadsheet Data Loader.

  3. The main work involved in using File-Based Loader and HCM Data Loader is the extract of the data from a source system to the correct format ready for loading. Since this is not part of Oracle Fusion, Oracle does not provide a conversion process from File-Based Loader to HCM Data Loader.

  4. Oracle does provide the migration of File-Based Loader GUID values to the HCM Data Loader equivalent, which are referred to as source keys.

  5. Customers using Oracle Fusion Taleo Recruitment Out of the Box (OOTB) V1 Integration are not impacted.

  6. If you are using Taleo Connect Client and File-Based Loader or a hybrid with OOTB to integrate with Fusion, you will need to perform an evaluation and follow the steps to migrate to HCM Data Loader.


HCM Data Loader and File-Based Loader cannot be used at the same time for objects supported by both; one of them should be picked for conversion.
The setting of the HCM Data Loader Scope parameter on the Configure HCM Data Loader page determines whether HCM Data Loader or File-Based Loader is used and controls the behavior of the loading tools. The default value of this parameter is Limited for existing customers. If you attempt to load data for a business object not supported in Limited mode, your whole data set will fail.
Limited mode: Only business objects not supported by HCM File-Based Loader can be loaded using HCM Data Loader. All objects that can use File-Based Loader must use File-Based Loader; any objects that are not available via File-Based Loader should use HCM Data Loader.
Full mode: HCM Data Loader is used for bulk-loading data into all supported business objects. HCM File-Based Loader and HCM Spreadsheet Data Loader are disabled.
Important Note: You can switch from Limited mode to Full mode, but you cannot switch from Full mode to Limited mode. This is a one-time switch from File-Based Loader to HCM Data Loader.
Once you migrate to HCM Data Loader, HCM Spreadsheet Data Loader is also disabled because it relies on the File-Based Loader engine to load data to Oracle HCM Cloud. This restriction applies only to the spreadsheet loading that is launched from the Data Exchange work area. Other spreadsheet data loaders are not impacted by the uptake of HCM Data Loader.

HCM Data Loader will be Generally Available in R10 (also in Release 9 Patch Bundle 7 and above), but there is no immediate requirement to migrate to HCM Data Loader.
On upgrade to Release 10 you will see the HCM Data Loader options in the application, but if you are an existing File-Based Loader customer you should not use HCM Data Loader until you have completed an evaluation of it.
Important Note:
There are differences in file format and key structures.
Once the switch to HCM Data Loader has occurred, you will no longer have access to File-Based Loader or HCM Spreadsheet Data Loader.
If you have a requirement to load documents of record or areas of responsibility, then you can use HCM Data Loader in Limited mode with no impact on File-Based Loader or HCM Spreadsheet Data Loader, since these objects are not currently supported by File-Based Loader.

If you are live with File-Based Loader and testing HCM Data Loader in a nonproduction environment, then you should plan your environment refresh (P2T) requests carefully.
When you request an environment refresh, the HCM Data Loader settings will be overwritten, and the environment will revert to the default Limited mode.
You will need to go through the same steps as before to switch back to HCM Data Loader. That is, you must convert File-Based Loader GUIDs to HCM Data Loader source keys and switch HCM Data Loader Scope to Full.
During HCM Data Loader migration validation and testing, there are important testing considerations that must be included in your planning.
HCM Data Loader in Full mode is not compatible with File-Based Loader; therefore, it is not possible to have an environment with both HCM Data Loader and File-Based Loader at the same time.
This will impact your ability to test File-Based Loader transactions in your nonproduction environment while you are in the process of validating HCM Data Loader.
Important Note: You will need to ensure that the HCM Data Loader enabled environment is not required for any File-Based Loader testing prior to setting the HCM Data Loader Scope to Full.

It is not possible to move to HCM Data Loader for individual core objects on an incremental basis. It is a one-time migration and requires careful planning and preparation to ensure a smooth transition.

One of the most important decisions when considering the upgrade from File-Based Loader to HCM Data Loader is whether to continue using the same key mechanism as File-Based Loader (GUIDs) or to take advantage of the user key support available in HCM Data Loader.
User keys allow objects to be identified in HCM Data Loader using their natural key; for example, Job Code, Person Number, and so on.
File-Based Loader GUIDs have an equivalent in HCM Data Loader known as source keys. These are values that are defined in the source system and stored alongside the Oracle Fusion surrogate keys when objects are created in Oracle HCM Cloud. Source keys can be used to reference objects when loading related data or to identify specific objects when performing updates or deletes.
Within HCM Data Loader, each object can use different types of keys, so a decision needs to be made on an object-by-object basis to determine whether a user key or a source key will be used.
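For example, here is a hedged sketch of the two key styles for a Job object in an HCM Data Loader file (attribute lists abbreviated, all values hypothetical). The first pair of lines identifies the job by its user key (the natural JobCode and SetCode), while the second pair identifies it by its source key (SourceSystemOwner plus SourceSystemId):

METADATA|Job|JobCode|SetCode|EffectiveStartDate
MERGE|Job|DEV_JOB|COMMON|2015/01/01

METADATA|Job|SourceSystemOwner|SourceSystemId|EffectiveStartDate
MERGE|Job|LEGACY_HR|JOB_1001|2015/01/01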

To facilitate the upgrade from File-Based Loader to HCM Data Loader, a process is provided to migrate the File-Based Loader GUIDs to HCM Data Loader source keys (source system IDs). Regardless of whether user keys or source keys will be used, it is recommended that this process be run as the first step.

Before reworking the export processes, you can download a template for each business object supported by HCM Data Loader. These templates take into account any flexfield structures that are already in place, so you can use them to outline accurately the shape of the data that the reworked export processes need to generate.

The main task required for migration to HCM Data Loader is the rework of the export process that generates the data for loading to Oracle HCM Cloud. This process needs to take into account the correct attributes for the HCM Data Loader objects and prepare the files in the format expected by HCM Data Loader.
The attached spreadsheet maps the HCM Data Loader data file name, file discriminator, and attribute name to the corresponding File-Based Loader data file and attribute name.
HCM Data Loader only supports files loaded via Oracle WebCenter Content. If customers are currently using SFTP, then those file-transfer processes will need to be changed.
Similar to File-Based Loader, HCM Data Loader has a web service that can be used to invoke the HCM Data Loader processing.
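As a rough illustration of invoking such a service, the sketch below uses the standard JAX-WS Dispatch API to send a SOAP request. The endpoint URL, namespace, operation name, and payload element names here are placeholders, not the real service contract; take the actual values from the HCM Data Loader service WSDL in your environment, and note that a real call would also require authentication (for example, WS-Security headers), which is omitted:

HdlInvoker.java

package com.example.hdl; // hypothetical package

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class HdlInvoker {

    public static void main(String[] args) throws Exception {
        // Placeholder values; read the real ones from the service WSDL
        String endpoint = "https://fusion-host/hcmCommonDataLoader/HCMDataLoader";
        String ns = "http://example.com/hcmDataLoader";

        QName serviceName = new QName(ns, "HCMDataLoaderService");
        QName portName = new QName(ns, "HCMDataLoaderPort");

        // Create a WSDL-less service and point it at the endpoint
        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, endpoint);
        Dispatch<SOAPMessage> dispatch =
                service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

        // Build a SOAP request for the (assumed) import-and-load operation
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPBodyElement op =
                request.getSOAPBody().addBodyElement(new QName(ns, "importAndLoadData"));
        // Content ID of the zip file previously uploaded to Oracle WebCenter Content
        op.addChildElement("ContentId").addTextNode("UCMFA00012345"); // hypothetical ID
        request.saveChanges();

        // Invoke the service and print the raw SOAP response
        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out);
    }
}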


Sample Screenshot (Mapping Sheet)


The offline Data File Validator Tool (HDLdi) can be used in the extract process to ensure that the data files being prepared are valid in terms of data format. It also checks any business rules that apply to the data contained in the data file, where other Oracle HCM Cloud data is not required as part of the validation.







Sample Data File: Worker.dat
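Below is a minimal, illustrative sketch of the HCM Data Loader pipe-delimited format for Worker.dat. The attribute lists are heavily abbreviated and all values are hypothetical; a real file requires additional components and attributes. Note how the PersonName record references its parent Worker by source key using the PersonId(SourceSystemId) hint:

METADATA|Worker|SourceSystemOwner|SourceSystemId|PersonNumber|EffectiveStartDate|EffectiveEndDate|StartDate|ActionCode
MERGE|Worker|LEGACY_HR|PER_1001|1001|2015/01/01|4712/12/31|2015/01/01|HIRE

METADATA|PersonName|SourceSystemOwner|SourceSystemId|PersonId(SourceSystemId)|EffectiveStartDate|EffectiveEndDate|LegislationCode|NameType|LastName|FirstName
MERGE|PersonName|LEGACY_HR|NAME_1001|PER_1001|2015/01/01|4712/12/31|US|GLOBAL|Doe|John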



Implementing and deploying web services in Java


Objective:

In the previous article, Conversion of Object to XML document and from XML to Object in Java, we learned how to convert between the two formats. In this article, we will see how to implement and deploy web services in Java.


Implement and deploy web service:

This end-to-end scenario covers the creation of a simple web service from scratch, with a simple logging message handler, in a web project. It walks through developing a simple JAX-WS 2.0-based web service with a message handler.
This document is not intended as a test specification for this feature; it describes developing a simple web service in NetBeans 5.5.

Table of Contents
Creating web project

Creating web service

Implementing web service

Creating message handler

Testing web service


Creating web project

Go to File - New Project - Web - Web Application

Specify HelloWs as Project Name

Specify project's directory

Choose Java EE 5 as the J2EE version

Click Finish

Creating web service

Go to File - New File - Web services - Web Service - Next

Specify the name of the web service, e.g. HelloWebService

Specify the package for the web service, e.g. org.netbeans.end2end.hellosample

Click Finish

Implementing web service

Uncomment the commented code in the source file

Add serviceName="GreeterWs" to the @WebService annotation

Add operationName="sayHi" to the @WebMethod annotation

Add @WebParam(name="name") to the String parameter of the operation

Add a new sayHello(String): String operation to the web service using the Web Service -> Add Operation... action in the editor

Implement both operations, so that the implementation class looks like the sketch below:
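A sketch of the implementation class, assembled from the steps above; the bodies of the two operations are illustrative:

HelloWebService.java

package org.netbeans.end2end.hellosample;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName = "GreeterWs")
public class HelloWebService {

    // Exposed as the "sayHi" operation; the parameter is exposed as "name"
    @WebMethod(operationName = "sayHi")
    public String operation(@WebParam(name = "name") String param) {
        return "Hi " + param + "!"; // illustrative body
    }

    // Second operation, added via Web Service -> Add Operation...
    @WebMethod
    public String sayHello(String name) {
        return "Hello " + name + "!"; // illustrative body
    }
}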



Creating message handler

Go to File - New File - Web services - Message Handler - Next

Specify the name of the message handler, e.g. MessageHandler

Specify the package for the message handler, e.g. org.netbeans.end2end.hellosample

Click Finish

Implement a simple log method in the handler, e.g. like the sketch below:
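A sketch of such a handler; the logging logic is illustrative and simply writes each SOAP message to standard output:

MessageHandler.java

package org.netbeans.end2end.hellosample;

import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class MessageHandler implements SOAPHandler<SOAPMessageContext> {

    public Set<QName> getHeaders() {
        return null; // no headers processed by this handler
    }

    public boolean handleMessage(SOAPMessageContext context) {
        log(context);
        return true; // continue normal message processing
    }

    public boolean handleFault(SOAPMessageContext context) {
        log(context);
        return true;
    }

    public void close(MessageContext context) {
        // nothing to clean up
    }

    // Simple log method: dump the inbound or outbound SOAP message
    private void log(SOAPMessageContext context) {
        Boolean outbound =
                (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        System.out.println(Boolean.TRUE.equals(outbound)
                ? "Outbound message:" : "Inbound message:");
        try {
            SOAPMessage message = context.getMessage();
            message.writeTo(System.out);
            System.out.println();
        } catch (Exception e) {
            System.out.println("Failed to log message: " + e.getMessage());
        }
    }
}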

Then add the handler to the web service using the Configure Handlers... action in the context menu of the HelloWebService node



Testing web service

Now we have finished our web application with its web service, and we are ready to build and deploy it and check whether the web service works correctly.

Invoke the Deploy action from the project's context menu

Invoke the Test Web Service action from the web service's context menu

Let's test the web service using the form opened in the web browser:



