
An Overview of Future Dated Payment or Bills Payable in Fusion Application


What is Future Dated Payment/Bills Payables in Oracle Fusion Applications?

Future dated payments, or Bills Payable, in Fusion Applications are used to control the timing of your payments and therefore your cash flow. A future dated payment instructs the bank to disburse funds to your supplier's bank on a specific date, known as the maturity date. For future dated payments, Payables creates journal entries in two stages: first to recognize the payment (reduction of liability), and second to recognize the clearing of the payment (reduction of cash).

A bill payable works like a post-dated cheque, with the maturity date printed on the check. In short, future dated payments are payments that mature at a future date, e.g. bills payable or post-dated checks.

 

What are the different accounting entries involved in Future Dated Payment/Bills Payables?

Below are the accounting entries that Oracle Fusion Applications creates during the Bills Payable cycle (a worked example follows the entries).

  1. At the time of Payment Issue (Status Issued)

                AP Liability A/c Dr

                                     Bills Payable/Future Dated Payment A/c Cr

  2. At the time of Payment Maturity (Status Maturity)

                    Bills Payable/Future Dated Payment A/c Dr

                                          Cash Clearing A/c Cr

  3. During Bank Reconciliation (Status Cleared)

                 Cash Clearing A/c Dr

                                           Cash A/c Cr
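To make the flow concrete, here is a small illustrative sketch (plain Python, not Oracle code) showing the three stages applied to the USD 5,000 invoice created later in this article; it simply checks that each stage balances and that the interim accounts net to zero across the cycle.

```python
# Illustrative sketch only (plain Python, not Oracle code): the three bills
# payable stages applied to the USD 5,000 invoice used later in this article.
from collections import defaultdict

entries = [
    # (stage, account, debit, credit)
    ("Issued",  "AP Liability",                      5000.0,    0.0),
    ("Issued",  "Bills Payable/Future Dated Payment",   0.0, 5000.0),
    ("Matured", "Bills Payable/Future Dated Payment", 5000.0,   0.0),
    ("Matured", "Cash Clearing",                         0.0, 5000.0),
    ("Cleared", "Cash Clearing",                      5000.0,    0.0),
    ("Cleared", "Cash",                                  0.0, 5000.0),
]

# Every stage must balance (total debits equal total credits).
for stage in ("Issued", "Matured", "Cleared"):
    stage_net = sum(dr - cr for s, _, dr, cr in entries if s == stage)
    assert abs(stage_net) < 1e-9

# Across the whole cycle the interim accounts net to zero; only the liability
# relief and the cash outflow remain.
balances = defaultdict(float)
for _, account, dr, cr in entries:
    balances[account] += dr - cr
print(dict(balances))
# {'AP Liability': 5000.0, 'Bills Payable/Future Dated Payment': 0.0,
#  'Cash Clearing': 0.0, 'Cash': -5000.0}
```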

Below are the steps to configure the Future Dated Payment/Bills Payable functionality in Oracle Fusion Applications:

  1. Provide Bills Payable Account in Common Options

  2. Create a Payment Method for Bills Payable

  3. Create Standard Invoice for Desired Amount

  4. Make Payment and include Maturity date

Provide the bills payable account in the common options. The supplier can encash the check we issue only after the maturity date.

Search for the task "Manage Common Options for Payables and Procurement" (for example, through your implementation project), click Go to Task, enter the bills payable account, then save and close.

 

Search for "Manage Payment Methods", go to the task, and create a separate payment method for bills payable.

Provide the required details, enter the start date, and enable "For Use in Payables".

Under the Bills Payable tab, enable the "Bills Payable" check box.

Move to the Cash Management tab and enable "Enable for use in Cash Management".

And click “Use payment method to issue bills payable”

Save and close, then click Done.

 

Now create a standard invoice for USD 5,000. Enter the invoice details and, under Invoice Actions, go to "Manage Installments" to override the default payment method with the bills payable payment method.

 

Save and close, validate the invoice, and choose Post to Ledger.

Now make the payment: go to Payments, create a payment, enter the details, and select the bills payable payment method.

Specify the maturity date under the "Advanced" tab.

Select the invoice to pay on the same screen.

Apply and OK.

 

Now navigate to “Schedule Process” and search for “Update Matured Bills Payable Status”

Enter Business Unit and Maturity details.

Run this job.

Check the status of the payment just made; the status should now be "Negotiable".

Run Create Accounting for payment document number 5085.

Verify the accounting created for the payment.


High Priority Training


Table of Contents

  • IDCS
  • FCCS
  • Oracle ODI Cloud Service
  • Oracle Account Reconciliation Cloud Service (ARCS)
  • Oracle Analytics Cloud Service (OACS)
  • Revenue Management

IDCS

 

What is IDCS?

Oracle Identity Cloud Service (IDCS) is a security and identity platform built for both on-premise and cloud (mostly hybrid cloud) environments. IDCS is built in the cloud with the main purpose of allowing a single sign-on to work across multiple applications while maintaining a high level of security. IDCS is designed around the characteristics of cloud services such as scalability, elasticity, and ease of deployment.

 

IDCS provides functions and licenses that allow users to sign in once to gain access to multiple applications, and administrators to manage users, grant access to applications, and generate reports from the data collected so far. IDCS integrates with existing ERPs, implements standard OpenID Connect (which allows clients to identify the end user based on the authentication performed), and provides support for SAML 2.0 browser profiles (SAML 2.0 is a version of SAML that deals with exchanging authentication and authorization data between security domains).

 

IDCS can integrate with external cloud services (providers other than Oracle) and uses an identity bridge to connect on-premise identity stores with IDCS; changes made to an on-premise identity are propagated via the bridge to IDCS as well. This gives the customer the freedom to decide when to move from on-premise to cloud.

 

IDCS provides strong security that can be used across SaaS, PaaS, and IaaS.

 

Main Purpose of IDCS

  • Security, by providing single sign-on, identity management, and identity authentication.

  • Easy access to features, since it integrates directly with existing ERPs.

  • Hybrid identity while working in the cloud or on-premise.

How is IDCS Applicable to the Industry

  • As a cloud service (allowing both on-premise and cloud resources to be secured from a single set of controls)

  • From mobile (as a native app or from the mobile browser)

  • For employee-facing intranet and customer-facing extranet solutions

Oracle Financial Consolidation & Close Cloud Service (FCCS)

 

Use of FCCS

FCCS was built to replace HFM (Hyperion Financial Management), so there is no need for server maintenance and upgrade patches.

 

High level overview

FCCS is used for managing the financial consolidation process (collecting data from various departments and consolidating it for reporting purposes) and the close process. FCCS allows an organization to customize and build an application with only the features it truly requires; instead of working around features that are not needed, they can simply be left out. FCCS is upgraded with every new release, which benefits the customer.

 

Any organization can build and use the application with its own content in mind. The FCCS framework has built-in financial intelligence and provides access to an ever-growing library of prebuilt dimensions, calculations, reports, and dashboards.

 

Oracle ODI Cloud Service (Oracle Data Integrator)

 

Purpose

Cloud technology is growing at a rapid rate and almost all businesses are moving to the cloud rather than staying on premise. But many companies have run into integration problems, missed project deadlines, and security breaches, and there are multiple integration options to choose from. All of this calls for a good integration tool.

 

Oracle Integration Cloud Service integrates applications across clouds and on premise. Integration Cloud Service provides automatic backups and upgrades, giving the organization time to concentrate on more important tasks at hand, such as creating applications. The major benefit Integration Cloud Service provides is eliminating the need to write and run code in order to perform integration.

 

High level working Idea

The extract-load-transform (E-LT) architecture extracts data from the sources, loads it into the target, and then executes the transformations in the target. ODI's Knowledge Modules are flexible and extensible; since ODI ships with multiple Knowledge Module libraries, they can be customized to provide high performance.
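To illustrate the E-LT idea described above (only the idea; this is plain Python and SQLite, not ODI), here is a toy sketch in which raw data is loaded into the target first and the transformation is then executed by the target database itself. The table and column names are made up for illustration.

```python
# Toy illustration of the E-LT pattern (not ODI itself): load raw data into the
# target first, then let the target database execute the transformation.
import sqlite3

raw_rows = [("2019-01-05", "A100", 250.0),
            ("2019-01-06", "A100", 125.5),
            ("2019-01-06", "B200", 980.0)]            # pretend this was extracted

target = sqlite3.connect(":memory:")                   # stands in for the target DB
target.execute("CREATE TABLE stg_orders (order_date TEXT, customer TEXT, amount REAL)")
target.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", raw_rows)   # Load

# The transform runs inside the target engine, after the load (the "LT" in E-LT).
target.execute("""CREATE TABLE dw_customer_totals AS
                  SELECT customer, SUM(amount) AS total_amount
                  FROM stg_orders GROUP BY customer""")

print(target.execute("SELECT * FROM dw_customer_totals").fetchall())
```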

Oracle Account Reconciliation Cloud Service (ARCS)

 

Purpose

ARCS is built with the sole purpose of managing the reconciliation process (checking that two sets of records are in agreement). ARCS provides real-time visibility, which ensures that every reconciliation performed is properly qualified.

 

ARCS achieves this through Reconciliation Compliance and Transaction Matching.

Reconciliation Compliance ensures that the correct format is used so that every balance is fully justified. A risk-based preventive control structure ensures there is no loss in data quality.

 

High level overview

ARCS allows users to set their own rules for each process at the beginning of the cycle.

There are prebuilt formats, but organisations can configure additional formats if needed.

Monitoring, reporting, and analysis can be done easily.

 

It allows organizations to streamline the process and achieve better performance.

One of the main reasons to use ARCS is that risks such as zero-balance accounts, low-threshold items, and other exposures can be minimized.

Oracle Analytics Cloud Service (OACS)

 

What is OACS and its purpose

OACS is a PaaS offering that deals with AI, machine learning, and service automation. OACS is highly user friendly. It is built to provide a dynamic platform that removes barriers between various data sources, changing the way information is analysed and worked on.

 

OACS Overview

OACS comes in three editions so that customers can choose what they need.

The three editions are Standard Edition, Data Lake Edition, and Enterprise Edition.

 

  • Standard Edition is the basic edition, offering self-service discovery and ML-driven data preparation, and can be used across multiple platforms.

  • Data Lake Edition sits above Standard Edition, providing everything Standard Edition provides plus accelerated analysis powered by Essbase (a multidimensional DBMS).

  • Enterprise Edition is the highest edition, offering everything that Standard Edition and Data Lake Edition provide plus support for a full semantic layer (a business representation of corporate data) that enables proactive analytics.

Revenue Management

 

This system is used to keep track of revenue as defined in ASC 606 and IFRS 15 (Revenue from Contracts with Customers). The framework is able to recognize revenue at any given time.

Oracle Revenue Management Cloud can work with any cloud application, as it is not restricted by any barriers. In addition, it can access data from and publish data to both EBS and ERP Cloud.

 

Main Purpose

Both ASC 606 and IFRS 15 introduce new principles that need to be followed, as the old revenue recognition rules are replaced.

 

High Level Working Overview

Revenue Management Cloud identifies any contract with a customer from ERP, EBS, or third-party sources. This data is then analysed, and performance obligations and accounting contracts are created.

The transaction price is then calculated by totaling the order values of all orders.

Based on ASC 606 and IFRS 15 requirements, the transaction price is allocated across the performance obligations. Revenue Management has multiple dimensions to arrange and maintain the prices.

Oracle Revenue Management Cloud makes use of the Subledger Accounting rules engine, which enables the user to generate entries for multiple ledgers.

Based on ASC 606 and IFRS 15, the system recognizes or derecognizes contract liabilities as either party performs; as the service is delivered or payments are received over time, revenue is recognized.
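As a rough illustration of that allocation step (hypothetical figures, not Oracle code): under ASC 606 / IFRS 15, the transaction price is allocated to each performance obligation in proportion to its relative standalone selling price, as the small sketch below shows.

```python
# Illustrative sketch of the allocation step (ASC 606 / IFRS 15): the total
# transaction price is allocated to each performance obligation in proportion
# to its standalone selling price. Figures are made up for illustration only.
standalone_prices = {"device": 800.0, "support_1yr": 200.0}   # hypothetical obligations
transaction_price = 900.0                                      # total contract price

total_ssp = sum(standalone_prices.values())
allocation = {ob: round(transaction_price * ssp / total_ssp, 2)
              for ob, ssp in standalone_prices.items()}

print(allocation)   # {'device': 720.0, 'support_1yr': 180.0}
```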

Pay Alone Concept in Oracle Fusion Application


In this article the user will understand what the Pay Alone functionality in Oracle Fusion Applications is and how it works.

 

Pay Alone Concept in Oracle Fusion Application-

By default, multiple invoices for a supplier are paid with one check, but if there is a requirement to pay a single invoice separately, Oracle Fusion Applications provides controls for this. We can use the Pay Alone concept to pay an invoice on its own instead of grouping it with other invoices; the system generates a separate check for that single invoice. Below are the main steps to enable this functionality:

 

1. Enable the Pay Alone feature at the supplier level and site level.

2. Create two or more invoices and observe how invoices get selected while creating payments.

Go to Manage Suppliers, search for the supplier, and click Edit. The Pay Alone option can be set up at various levels (supplier level, site level, etc.).

 

Supplier profile level setup: search for the supplier.

Go to Payment Tab and move to Payment Attributes

Go to Payment Specifications and enable the check box for Pay Each Invoice Alone.

 

Save this.

Repeat the same at the supplier address level; edit the information below.

Enable Pay Each Invoice Alone at address level-

Now go to Site level- edit this information

The same will be reflected in transactions.

 

Edit this and enable Pay Each Invoice Alone at this level.

 

Save this. The Pay Each Invoice Alone feature does not apply to invoices created before the function was enabled. Pay Alone is now activated for this supplier, and all invoices created from now on will be eligible for Pay Alone.

Create two or more invoices to see this feature working. While creating the invoices, click on Manage Installments under Invoice Actions and verify that the Pay Alone option is defaulted, since it was enabled.

 

Create one more invoice, so that there are now two invoices.

Go to the payments workbench and create a payment.

 

Go to Invoices to Pay and notice the Pay Alone invoices; the Pay Alone attribute shows as Yes.

Select one of the Pay Alone invoices, then try selecting another invoice; you will get the error message below.

 

This means the system will allow only one invoice per payment under the Pay Each Invoice Alone functionality.

Now process the payments for these two invoices separately.

Oracle Financial Consolidation and Close Cloud Service Training


Oracle Financial Consolidation and Close Cloud Service (FCCS) is a cloud based configurable solution to ensure that financial consolidation processes are compliant, auditable, timely and transparent. It's a subscription-based service built for, and deployed on, Oracle Cloud. That means no hardware setup, and minimal IT support, which makes it simple and quick to deploy.

Course Contents

Day 1

FINANCIAL CONSOLIDATION AND CLOSE CLOUD OVERVIEW

Describe the financial consolidation and close processes
Identify deployment  use cases
Describe Financial Consolidation and Close related components


DATA MANAGEMENT

Describe use cases for Data Management
Perform administration tasks: predefined system setting profiles, set up source systems, register target applications, set up drill through
Set up definitions for import format, locations, period mappings, category mappings
Perform loading data tasks: create member mappings, define data load rules, run or schedule data load rules
Perform batch processing - define batch, execute batch, schedule job

CONSOLIDATION MODULE - DIMENSIONS

Explain system dimensions and pre-seeded members including requirements and restrictions - upgrade and net new for extra customs
Design and set up Account dimension
Explain Data Source dimension and guidelines for adding member hierarchies
Explain Consolidation dimension
Explain Currency dimension

Design and set up Entity dimension
Design and set up Intercompany dimension
Design Cash Flow reporting, FX calculations, and  Currency Translation Adjustments using Movement dimension
Design GAAP and IFRS financial reporting using Multi-GAAP dimension

Day 2

CONSOLIDATION MODULE - BUILDING OUT A FINANCIAL CONSOLIDATION APPLICATION

Set up Financial Consolidation and Close security
Set up valid intersections for data entry and business rules
Create data forms
Execute intercompany matching reports with options
Manage approval process

FINANCIAL CLOSE MANAGER MODULE

Set up close process
Manage task types and templates

Manage close schedules

DESIGNING REPORTS AND DOCUMENTS

Design reports using Financial Reporting Studio

Day 3

ORACLE EPM CLOUD FOUNDATION

Set up and configure Cloud security
Perform system maintenance
Build EPM Cloud automation routines - Job Scheduler, EPMAutomate
Build integrations across systems and services - Integrated EPM Business Processes, REST APIs
Perform lifecycle management for different scenarios


CREATING AN FCCS APPLICATION

Describe the workflow for creating a Financial Consolidation and Close application
Create a Financial Consolidation and Close application
Explain application features available to be enabled
Design application framework (metadata and user-defined elements)
Design and implement importing and exporting metadata
Design and implement importing and exporting data

Day 4

CONSOLIDATION MODULE - CONSOLIDATIONS, ELIMINATIONS AND TRANSLATIONS

Describe sequence of events in consolidation process
Diagnose data flow of consolidation process and the role of Consolidation dimension
Diagnose anatomy of elimination process and the role of Intercompany dimension
Explain default currency translations for a multi-currency application and methods applied
Create translation rules to override the default translations - amounts/rates, defaults, logic
Diagnose anatomy of converting reporting currencies as it relates to Entity Currency and Parent Currency
Describe calculation status and what actions can change them

SUPPLEMENTAL DATA MODULE

Create data sets and dimension attributes

 

USING SMART VIEW

Explain user tasks in Smart View
Analyze data using ad hoc and Smart Forms


Training Hours

Start Date: 01st June 2019

Training Schedule: 01, 02, 08 & 09th June 2019

Timing: 12:00 NOON GMT | 08:00AM EST | 5:00AM PST | 7:00AM CST | 6:00AM MST | 5:30PM IST  | 01:00PM GMT+1

This training will run for 4 days over weekends


 

Oracle Account Reconciliation Cloud Service (ARCS) Training


Oracle Account Reconciliation Cloud Service (ARCS) is a purpose-built solution in the cloud for managing and improving global account reconciliations. It provides real-time visibility into the performance of reconciliations and ensures that all reconciliations prepared are properly qualified.

Course Contents

Day 1

ACCOUNT RECONCILIATION CLOUD HOME

Worklist
Reconciliations
Matching

Dashboards
Reports


APPLICATION

Reconciliation Activity
Overview of Reconciliation Compliance Configuration
Periods
Service

TOOLS

Appearance
Announcements
Service Activity Report
Access Control
The Academy

Settings and Actions
Menu Welcome Panel
Navigator
Managing Preferences

Day 2

ADMINISTERING RECONCILIATION COMPLIANCE - LEARNING ABOUT  THE  RECONCILIATION PROCESS

Sample Task Flow Scenarios for Administrators and Power Users
Performing Variance Analysis
Process Overview for Reconciliation Compliance
User Tasks in Reconciliation Compliance

MANAGING THE RECONCILIATION PROCESS

Accessing Reconciliations
Accessing Reconciliations from Dashboards Card

Accessing from Reconciliations Card

Worklist

Working with views

CREATING FILTERS AND SAVING LISTS

Creating Reconciliations

Checking for Missing Reconciliations
Preparing Reconciliation
Adding Attachments
Adding Comments
Configuring Questions
Working with Transactions
Adding Transactions

Copying Transactions from Prior Reconciliation

Adding Transactions Manually
Editing, Copying, and Deleting Transactions
Amortization or Accretion Transactions
Creating Amortization or Accretion
Transactions Manually
Copying Amortized or Accreting Transactions from Prior Reconciliations
Importing Amortizing or Accreting Transactions

Understanding Data Loads in Account

Reconciliation Cloud
Importing Data Using Data Management
Setup Tasks in Data Management
Workflow Tasks in Data Management
Define and Save a Data Load Definition
Executing a Data Load in Reconciliation Compliance and Viewing Results
Importing Pre-mapped Data

Importing Pre-mapped Transactions
Importing Pre-mapped Balances
Changing a Period's Status
Closing and Locking Periods

 

Day 3

ONGOING ADMINISTRATIVE TASKS

Submitting, Approving, and Rejecting Reconciliations
Updating Reconciliations
Updating Reconciliation Attributes
Managing Reassignment Requests
Reassigning Preparers and Reviewers

Reopening Reconciliations
Using Teams
Claiming and Releasing Team Reconciliations
Performing Summary Reconciliations

 

ADMINISTERING TRANSACTION MATCHING - LEARNING ABOUT TRANSACTION MATCHING

Understanding the Transaction Matching Engine

 

CREATE RECONCILIATIONS IN TRANSACTION MATCHING

UNDERSTANDING DATA LOADS IN ACCOUNT RECONCILIATION CLOUD

Importing Data for Transaction Matching

 

RUNNING AUTOMATCH

Searching Transactions
Exporting Adjustments or Transactions as Journal Entries
Creating Global Adjustment and Support Attributes
Defining the Journal Columns
Map Attributes to Journal Attributes

Exporting to a Text File in Jobs History

Day 4

CREATING A SAMPLE OR NEW APPLICATION

Creating an Application 
Creating  a Sample Application
Creating a New Application
Removing an Application

PERFORMING OTHER TASKS IN ACCOUNT RECONCILIATION CLOUD

 

USING REPORTS

Generating Predefined Reports in Reconciliation Compliance
Generating Predefined Reports in Transaction Matching

Generating Custom Reports

Creating a Query

Creating a Template

Creating a Report Group

Creating a Report Definition

Generating the Report

Using Report Binders in Reconciliation Compliance

Viewing Report Binders

Generating Report Binders

 

MIGRATING TO THE SAME OR A DIFFERENT ENVIRONMENT

Migrating using EPM Automate Utility
Migrating Using Navigator

 

MIGRATING FROM ON-PREMISE FINANCIAL CLOSE MANAGEMENT TO ACCOUNT RECONCILIATION CLOUD

 

ARCHIVING

 

USING EPM AUTOMATE FOR ADMINISTRATIVE TASKS


Training Hours

Start Date: 25th May 2019

Training Schedule: 25, 26 May, 02 & 02nd June 2019

Timing: 12:00 NOON GMT | 08:00AM EST | 5:00AM PST | 7:00AM CST | 6:00AM MST | 5:30PM IST  | 01:00PM GMT+1

This training will run for 4 days over weekends


 

 

DevOps Trend In 2019


What should we expect in DevOps in 2019?

 

The DevOps adoption trend is growing rapidly across both the digital landscape and the enterprise, with strong use of containerization along with the various available open-source tools.

Initially we all thought that DevOps was just a buzzword; now we know that was a myth. What I'm seeing is that organizations' main focus is now on DevOps to ensure that they can shape their software development and operations.

 

DevOps adoption in brief:

  • DevOps practices can be implemented at the individual application or project level, or all the way up to organization-wide alignment.

  • Each level of DevOps adoption poses its own complexities and challenges.

  • Many achieve DevOps success through an approach that covers four key areas and four key "audiences," or stakeholders.

  • The key tools, methods, and considerations for DevOps transformations have made adoption convenient.

 

 

 

Every organization practicing DevOps needs a comprehensive strategy to achieve sustainable business value.

I see a few factors that will play a very important role in practicing DevOps in 2019 and beyond:

1. The DevOps 6 C's

Understanding the 6 C’s of the DevOps cycle and making sure to apply automation between these stages is the key, and this is going to be the main goal in 2019.

Taking care of these six stages will make you a good DevOps organization. This is not a must-have model, but one of the more sophisticated models out there. It will give you a fair idea of the tools to use at different stages to make this process more valuable for a software-powered organization.

A CD pipeline, a CI tool, and containers make things easy, and when you want to practice DevOps, having a microservices architecture makes even more sense.

 


 

2. Security

With various open-source tools in the market and their strong community base driving adoption, security will play a very important role here.

Security should be considered right from the beginning of DevOps adoption; it is no longer an option but a mandate.

We should rather call DevOps "DevSecOps".

 


 

3. Automation

It's not just automation of individual tasks we are looking at, but automation viewed more holistically: tools, processes (automated code analysis, automated security code scanning, automated build/packaging, automated deployments), testing (automated functional and non-functional testing, automated test data management and service virtualization), and integrated monitoring & operations will be the strong way forward. We are no longer looking at tools that cannot integrate or talk to each other. Processes that were performed manually will now need to be automated. Manual testing will become redundant and give way to more automated testing. Automated monitoring, self-healing solutions, and automated operations are the way forward.

Automation not only increases speed and efficiency but eliminates human error and can work round the clock without fatigue.

 


 

4. Containerization

To make implementation more efficient and seamless, containerization is the way forward. We are looking mainly at Docker and Kubernetes.

 


 

5. Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) are going to change the world. AI and ML are perfect fits for a DevOps culture. They can process vast amounts of information and help perform menial tasks, freeing the IT staff to do more targeted work. They can learn patterns, anticipate problems and suggest solutions. If DevOps’ goal is to unify development and operations, AI and ML can smooth out some of the tensions that have divided the two disciplines in the past

 

Source: dzone.com, Linux.com, devops.com, cuinsight.com

 

Oracle Data Integrator Cloud Service Training


Oracle Data Integrator Cloud Service provides all of the functionality included in Oracle Data Integrator Enterprise Edition in a single heterogeneous Cloud Service integrated with the Oracle Public Cloud, providing an easy-to-use user interface combined with a rich extensibility framework.

Course Contents

Day 1

ORACLE DATA INTEGRATION PLATFORM CLOUD: OVERVIEW

Data Integration Platform Cloud
Benefits / Advantages of DIPC
DIPC Product Solutions
DIPC Editions
DIPC Architecture & Components

DIPC User and Roles

PROVISIONING AND ACCESSING DATA INTEGRATION PLATFORM CLOUD SERVICE

DIPC Provisioning Prerequisites
DIPC Provisioning
DIPC Console
Oracle Data Integrator

ORACLE DATA INTEGRATION PLATFORM CLOUD: AGENT CONFIGURATION

Describe Oracle Data Integration Platform Cloud (DIPC) Agent
Setup a DIPC Remote Agent
Start and stop the Agent

Day 2

ORACLE DATA INTEGRATION PLATFORM CLOUD: DATA SYNCHRONIZATION

What is Data Synchronization
DIPC Process Flow Overview
DIPC Synchronization Steps and Data Validation

ORACLE DATA INTEGRATION PLATFORM CLOUD - ENTERPRISE EDITION

Describe DIPC Enterprise Edition
Describe Enterprise Edition use Cases
Describe Oracle Golden Gate architecture and installed components
Identify OGG users and privileges for DIPC

Day 3

GOVERNANCE EDITION - DATA INTEGRATION PLATFORM CLOUD SERVICE

Access EDQ on DIPC
Data Profiling, Data Transformation and Data Auditing
Describe Business Rules


TROUBLESHOOTING: DATA INTEGRATION PLATFORM CLOUD SERVICE

Invalid Cloud Storage Connection
DBCS Instance Association Issue
Job Execution
ODI Session Logs
Operator Navigator
EDQ Application Not Accessible

MIGRATION FROM ON PREMISES TO DATA INTEGRATION PLATFORM CLOUD SERVICE

Migration Prerequisites
ODI Migration from On Premises to DIPC
EDQ Migration from On Premises to DIPC


Training Hours

Start Date: 06th July 2019

Training Schedule: 06, 07 & 13th July 2019

Timing: 12:00 NOON GMT | 08:00AM EST | 5:00AM PST | 7:00AM CST | 6:00AM MST | 5:30PM IST  | 01:00PM GMT+1

This training will run for 3 days over weekends


Oracle Analytics Cloud Service Training


Oracle Analytics Cloud provides the industry’s most comprehensive cloud analytics in a single unified platform, including everything from self-service visualization and powerful inline data preparation to enterprise reporting, advanced analytics, and self-learning analytics that deliver proactive insights.

Course Contents

Day 1

INTRODUCTION TO BUSINESS INTELLIGENCE CLOUD SERVICE

Business Intelligence Cloud Service Overview
Introduction to BI Cloud Service

DATA VISUALIZATION ON OAC OVERVIEW

Accessing Visual Analyzer
Adding Data Source
Different Objects within Visual Analyzer
Adding Data Elements to Visualizations
Creating Visualization
Modifying the Visualization

UPLOADING AND BLENDING DATA

Characteristics of External Sources
Uploading Data from External Sources
Adding File Based Data
Blending Data
Refreshing Data
Updating Details of Data

MANAGING DATA SOURCES, CONTROLLING, SHARING  OF DATA

Controlling Sharing of Data
Removing Data
Deleting Data
Blending Data
Managing Data Sources

DATA WRANGLING

Data Wrangling Functions
Applying Data Wrangling Functions for Data Column
Applying Data Wrangling Functions for Measures
Applying Data Wrangling Functions for Attributes

Day 2

DATA VISUALIZING CONTENT AND ADDING DATA ELEMENTS TO VISUALIZATIONS

Adding Data Elements to Visualizations
Adjusting the Canvas Layout
Changing Visualization Types and Properties
Reverse Visualization Edits
Refresh Visualization Content


EXPLORING DATA USING FILTERS, SORTING, DRILLING AND SELECTING

Creating Filters
Applying Range Filters
Applying List Filters
Applying Date Filters
Building Expression Filters
Exploring Data Using Drilling, Sorting and Selecting

CREATING CALCULATED DATA ELEMENTS AND BUILDING EXPRESSIONS

Creating Calculated Data Elements
Composing Expressions
Expression Editor Reference
SQL Operators
Conditional Expressions

LINKING VISUALIZATIONS AND BUILDING STORIES

Synchronizing Visualizations
Capturing Insights
Shaping Stories
Edit Insights
Include and Exclude Insights
Rearranging Insights
Sharing Stories


USING VISUALIZATION TYPES

Creating a Line Chart
Creating a Performance Tile

CREATING ANALYSES AND SORTS

Creating and Editing Analyses
Formatting Columns
Sorting Values in Views

Day 3

VIEWS IN ANALYSES

Displaying Data on a Graph
Working with Pivot Tables
Creating Calculated Items


USING FILTERS TO LIMIT DATA IN ANALYSES

Using Filters to Limit Data in Analyses
Grouping Filters to Limit Data in Analyses
Deleting Dashboard Pages


USING MASTER DETAIL VIEWS, MAP VIEWS, AND ACTION LINKS

Linking Views in Master-Detail Relationships
Map Views

CREATING DASHBOARDS

Introduction to Dashboards
Creating Personal Dashboard
Creating Shared Dashboard

ADDING CONTENT TO DASHBOARDS

Using Dashboard Builder
Dashboard Builder Tools
Dashboard Properties

OAC CLOUD WEB SERVICES AND CONFIGURING ANALYTICS ON MOBILE

Introduction to RESTful Web Services
Architecture of Oracle Database Cloud RESTful Web Services
Creating a RESTful Service


Training Hours

Start Date: 20th July 2019

Training Schedule: 20, 21 &  27th July 2019

Timing: 12:00 NOON GMT | 08:00AM EST | 5:00AM PST | 7:00AM CST | 6:00AM MST | 5:30PM IST  | 01:00PM GMT+1

This training will run for 3 days over weekends



Docker V/S Vagrant



Recently, I came across several people posting about the confusion between Docker and Vagrant, so I thought I would shed some light on this subject in this post.

Both Docker and Vagrant allow one to create predictable and repeatable development environments, but the main difference between them is that Docker uses container technology while Vagrant uses virtual machines. This means that a container keeps only the necessary pieces (code, runtime, system libraries, and tools) and isolates the application from outside influence, whereas a virtual machine (VM) comes with its own complete operating system and resource allocation. The host machine provides the necessary physical resources, but the virtualized environment works as an independent machine with its own BIOS, CPU, storage, and network adapters.

So what does that mean in simple terms? The picture below depicts it:

 

So let's briefly look at some more of the differences between VMs and containers.

Virtual Machine

A virtual machine (VM) is more like a physical computer, which comes with its own complete operating system and resource allocation. However, it is the host machine that provides the necessary physical resources, while the virtualized environment works as an independent machine with its own BIOS, CPU, storage, and network adapters. VM technology can be used with VMware, Oracle VirtualBox, and many more.

Modern virtual machines run on hypervisors that are the software, firmware or hardware responsible for the creation and execution of VMs. There are a lot of hypervisors available in the market. KVM, Red Hat Enterprise Virtualization (RHEV), XenServer, Microsoft Hyper-V and VMware vSphere / ESXi are the prominent players.

Containers

Containers create virtualization at the operating-system level and work as an executable software package that isolates applications from their surrounding environment. This makes them less resource hungry and lightweight. A container has the necessary pieces, such as code, runtime, system libraries, and tools, to keep the application separate from the outside environment. It runs on the operating system of the host machine, shares libraries and binaries when possible, and separates only the absolutely necessary resources.

Docker

Docker is an open-source container technology and it is quite popular because it makes it easier to create, run and deploy applications in a self-contained environment. Docker doesn’t create a whole operating system like a virtual machine. Instead, it uses the kernel of the host’s operating system and creates virtualization only for the application and necessary libraries. This approach makes it much more lightweight than virtual machines.

Docker containers are created from Docker images, which are essentially snapshots of machines. Users can easily start a container from an image. There are many images available on Docker Hub, such as Linux, Apache, Python, and Nginx, so team members do not have to go through the same installation process, and the best part is that it helps maintain a consistent environment for everyone. More info can be found at https://www.docker.com/
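As a quick illustration of "starting a container from an image", here is a minimal sketch using the Docker SDK for Python (this assumes the "docker" Python package is installed and a local Docker daemon is running; the image and command are just examples):

```python
# Minimal sketch using the Docker SDK for Python: pull an image from Docker Hub
# and start a container from it. Requires the "docker" package and a running
# Docker daemon; the image and command here are only examples.
import docker

client = docker.from_env()                      # connect to the local Docker daemon
client.images.pull("alpine:latest")             # fetch the image from Docker Hub
output = client.containers.run("alpine:latest",
                                "echo hello from a container",
                                remove=True)    # run once, then clean up the container
print(output.decode().strip())
```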

Vagrant

Vagrant is an open-source software product for building and maintaining portable virtual software development environments, e.g. for VirtualBox, Hyper-V, Docker containers, VMware, and AWS. It tries to simplify software configuration management of virtualizations in order to increase development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in a few languages.

Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments. Provisioners are tools that allow users to customize the configuration of virtual environments. Providers are the services that Vagrant uses to set up and create virtual environments.

Support for VirtualBox, Hyper-V, and Docker virtualization ships with Vagrant, while VMware and AWS are supported via plugins. More info can be found at https://www.vagrantup.com/

What should you choose?

The short answer is that if you want to manage machines, you should use Vagrant, and if you want to build and run application environments, you should use Docker. Vagrant is a tool for managing virtual machines; Docker is a tool for building and deploying applications by packaging them into lightweight containers.

Conclusion

Docker and Vagrant are both important and useful technologies that allow developers to improve their productivity. For rapid development and code sharing, Docker provides an advantage.

 

 

DevOps Assessment


As I keep seeing in this digital age, while many people are still figuring out "What is DevOps / DevSecOps / NoOps / DevTestOps, etc.?", there is another good half who have already embraced the journey with their DevOps.

But even among those who have already embraced the DevOps journey, few are really exploring: "What are the low-hanging fruits? How do we measure what we have achieved? How do we assess the maturity level of our DevOps? How do we translate those assessments into numbers that make sense to CIOs and CFOs?"

The answer to the above could be a DevOps Maturity Assessment followed by a Value Business Case. The Value Business Case, I believe, should account for the intangible benefits of DevOps in equal proportion to the tangible ones. So, when I look to assess the benefits of doing DevOps, I evaluate both the intangible and the tangible value. This is what I assessed in my first assessment with one of the largest digital manufacturing companies for their large enterprise transformation.

To explain more of this journey: a few colleagues and I conducted several assessment workshops with all the work streams and gathered information in these areas:

Step 1: DevOps Basic Information Scoring

  • Establish Team
  • Establish Goals and Vision
  • Establish the Product Backlog
  • Understand Stakeholder Involvement
  • Setup Project Environment
  • Agree Definition of Ready (DoR)
  • Agree Definition of Done (DoD)
  • Create Initial Architecture
  • Identify Risks and Mitigation Strategies
  • Define Engineering Standards
  • Produce and Agree Release Level Schedule
  • Refine Product Backlog
  • Plan Iteration
  • Build Iteration Backlog Items
  • Test Product Increment
  • Conduct Daily Stand Up Meeting
  • Measure
  • Conduct Iteration Review
  • Conduct Iteration Retrospective 

Step 2: Service Management Scoring

  • Release & Change Management
  • Incident Management
  • Problem Management
  • Security & Risk Management
  • Transition Management
  • Capacity Management

Step 3: DevOps Scoring (a simple scoring sketch follows this list)

  • Configuration Management
  • Continuous Integration
  • Continuous Testing
  • Continuous Delivery
  • Continuous Monitoring
  • Environment Provisioning
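To make the scoring concrete, here is a hypothetical sketch (my own illustration, not a standard model): rate each area from the lists above on a simple 1-5 scale, roll the ratings up into a maturity index per step, and flag the lowest-rated areas as low-hanging fruit. The weights, scale, and scores are made up for illustration.

```python
# Hypothetical scoring sketch: the category names come from the lists above,
# while the 1-5 ratings and the roll-up formula are illustrative only.
scores = {
    "Service Management": {"Release & Change Management": 3, "Incident Management": 4,
                           "Problem Management": 2, "Security & Risk Management": 3,
                           "Transition Management": 2, "Capacity Management": 3},
    "DevOps": {"Configuration Management": 4, "Continuous Integration": 3,
               "Continuous Testing": 2, "Continuous Delivery": 2,
               "Continuous Monitoring": 3, "Environment Provisioning": 2},
}

for step, areas in scores.items():
    index = sum(areas.values()) / (5 * len(areas))          # normalise to 0..1
    print(f"{step} maturity index: {index:.0%}")
    for area, rating in sorted(areas.items(), key=lambda kv: kv[1]):
        if rating <= 2:
            print(f"  low-hanging fruit: {area} (rated {rating}/5)")
```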

 

 

Step 4: DevOps Value Business Case

For deriving the Value Business Case, the following were a few of the areas of information which I gathered:

Tangible (key ones):

  • Effort Savings (Build & Release, Deployment, Testing etc)
  • Software Cost Savings (making redundant / open source / sunset tools)
  • Reduced FTE
  • Reduced Infrastructure

 

Intangible (key ones):

  • Productivity
  • Faster Communication
  • Faster Decision Making
  • High Team Morale

 

This approach will definitely help you assess the landscape, allowing you to find the gaps in processes, tools, and people, and helping you provide the best possible solution for filling the gaps identified.

Another important thought to carry along while reading this blog is that the value of DevOps, or its benefits, can be vastly different from person to person. For a Project Manager, it could simply mean how much efficiency and quality improvement it added to the release; Operations Managers would want to assess it by the ease of deployment; a CIO may focus on application uptime and performance, while a CFO may be interested in cost savings.

There are tools available which can run agents on various machines to capture data that will help in deriving the benefits of DevOps, but caution should be taken in choosing them: there are far too many, and a lot of the data collected from those tools will not help in finding the answers; rather, it will only add to the confusion. Before looking for any such tool, effort should be made, carefully and with diligence, to identify the metrics that are of value to the person or group seeking the evaluation. I'll make an attempt to discuss a few such tools and strategies in my next article.

Lastly, the benchmark values captured in the entire process will help you measure the effectiveness of DevOps adoption and eventually answer the questions asked initially: "What are the low-hanging fruits? How do we measure what we have achieved? How do we assess the maturity level of our DevOps? How do we translate those assessments into numbers that make sense to CIOs and CFOs?"

Please reach out to me in case you need any assistance / guidance in your journey.

 

 

Software Configuration Management - SCM


In the New IT, DevOps is about establishing an industrialization which, on demand and under pressure from the business, customers, and other stakeholders, finds a way to work more productively and increase the quality of outcomes.

  • DevOps in New IT is also a key enabler of Agile. It establishes a rigorous process for all phases of software development to help deliver more predictable, agile, efficient processes and higher quality outcomes at every stage.
  • After the coding and manual code review tasks are completed, the remaining activities including Merging, Build, Static Code Analysis, Unit & Functional Testing are entirely automated in DevOps, which means deployment could happen in a matter of few minutes.
  • SCM plays a crucial role in DevOps to not only store its code base and artefacts but also define a code base policy to become a key enabler of Continuous Integration/Continuous Delivery.

Software Configuration Management (SCM) is the backbone of Continuous Integration and is thus critical to DevOps. As demands on software developers increase, and IT is under pressure from the business, customers, and other stakeholders, there is a need to deliver like never before; finding a way to work more productively and increase the quality of outcomes is essential. DevOps is about establishing that industrialization. DevOps is also a key enabler of Agile, thanks to its ability to move rapidly and efficiently through the development lifecycle. To achieve all this, DevOps establishes a rigorous process for all phases of software development to help deliver more predictable, agile, efficient processes and higher-quality outcomes at every stage. DevOps is transforming how software organizations operate, taking web-scale and Fortune 500 companies to the next level of performance.

This article aims to answer: Why SCM? What is SCM? What is the right branching and merging strategy for my SCM? What tools should I use? It is based on my own understanding and some research on this subject.

A computer program tells a computer, or a set of systems, how to work or what to do. When we envision computer programs, we typically think of source code, which interacts with layers of application, operating system, and hardware. The concept of Software Configuration Management emerged from the need to control and re-use such computer programs.

The simplest definition of Software Configuration Management (SCM) is to control the software assets of an organization so they can be tracked, reported, improved & reused.

Why SCM? A simple view helps in understanding a bit more:

 

 

This simplistic concept of version control has evolved over the years, to the point where every component of computer software can now be controlled, provisioned, and written as code. You can control a host of Configuration Items, as depicted below:

 

SCM forms the core of an application development life-cycle as depicted below:

https://cdn.softwaretestinghelp.com/wp-content/qa/uploads/2018/08/1.CONFIGURATION-MANAGEMENT.png

 

SCM has long lived as a singular, segregated discipline and required years of expertise to do right. As we rotate towards the New IT, SCM today forms the core of DevOps and is an integral part of the product development life-cycle. The DevOps practices of Continuous Integration, Delivery & Deployment depend on a strong SCM process for their success.

The interconnected value streams of change management, source control, and CI/CD go hand in hand. Regardless of whether you are doing traditional (Waterfall/Agile) application development or running an advanced microservices factory with containerized application development, you will need to control artefacts at all stages of your software development. These artefacts can range from derived objects of builds, technology blueprints defining your continuous integration and delivery pipelines, configurations defining your deployments, and container images running your tools and applications, to infrastructure templates you use to provision your environments.

In conclusion, let's remember that 'everything in software evolution can be controlled, tracked & evolved'.

Better SCM practices will allow you to effectively use, evolve & re-use your automation. 

SCM in the New IT is embedded in the DevOps practices of Continuous Integration, Delivery & Deployment.

 

 

Branching & Merging Strategies (SCM - Part1)

$
0
0

1.1 What Is Branching

Branches in SCM are also known as streams or codelines. Just as a branch of a tree is a structural member connected to the tree but not part of its central trunk, a branch in SCM grows out of the mainline, also known as the trunk or main.

Branching gives teams of developers the ability to collaborate easily inside one central SCM work stream. It facilitates parallel development by providing the ability to work on two or more work streams without affecting the others.

When a developer creates a branch, the version control system creates a copy of the code base, or tree, at that point in time. Branches may also be created for functional reasons, e.g. for new features, bug fixes, spikes, releases, etc.

The most obvious reason for branching is to start an alternate line of development, i.e. one should branch whenever one cannot pursue and record two development efforts in one branch. Branching thus serves an "isolation" purpose: it isolates one development effort from another.

1.2 What Is Merging

Merging, in version control, allows you to combine two or more development histories. It is also termed integration, meaning integrating branches or versions of a file.

Most often, merging is required when the same file is modified on two different branches and the code/file needs to be merged from one branch to another to bring in the changes of the other branch. The result is a single collection of files that contains both sets of changes.
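For readers who want to see branching and merging in practice, here is a minimal sketch that drives the standard git CLI from Python's subprocess module (the repository, branch, and file names are hypothetical; git must be installed, and this should be run in a throwaway directory). It creates a branch off the mainline, commits to it in isolation, and merges it back.

```python
# Minimal branching/merging sketch: drive the git CLI from Python's subprocess.
# Repository, branch, and file names are hypothetical; run in a scratch folder.
import subprocess
from pathlib import Path

def git(*args):
    subprocess.run(["git", "-C", "demo-repo", *args], check=True)

subprocess.run(["git", "init", "demo-repo"], check=True)
git("config", "user.email", "demo@example.com")        # local identity for commits
git("config", "user.name", "Demo User")
git("checkout", "-b", "main")                           # mainline / trunk

Path("demo-repo/app.txt").write_text("base code\n")
git("add", "app.txt")
git("commit", "-m", "initial commit on the mainline")

git("checkout", "-b", "feature/login")                  # branch off the mainline
Path("demo-repo/login.txt").write_text("login feature\n")
git("add", "login.txt")
git("commit", "-m", "develop the feature in isolation")

git("checkout", "main")
git("merge", "feature/login")                           # merge combines both histories
```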

1.3 Codeline Policy and Branching Strategy

A Codeline Policy defines the set of rules which govern usage of a Codeline. Codeline policies are formed based on answers to questions:

  • How is the software released to market?
  • How is the development of the software planned?
  • What range of software packaging needs to be produced?

A Branching Strategy further defines how Codeline policies are to be implemented. Few common considerations in a branching strategy are:

  • Identifying configuration items of development that can be easily characterized
  • Identifying the differences and similarities between these items
  • Narrating how they relate to each other
  • Setting parameters for the issues relating to branches creation, interaction, and retirement

1.4 Branching Patterns

Branching patterns are established based on answers to the following questions:

  • Why is branching needed?
  • Where should a branch begin?
  • What would the branch represent in the development environment?
  • What would be the Codeline policy?
  • What would be the merging policy?
  • What should be the branch lifespan?

1.5 Mainline Based Branching Pattern

A Mainline is the central Codeline of the project where the code starts its life. As it is the primary lifeline for development, it is central to and has an ancestral relationship to all successive developments.

A mainline-based branching pattern can be used for smaller projects with small development teams where everyone is dedicated to one release effort at a time. This works particularly well if there is only one platform for release or the platforms have little differentiation. Development and any kind of bug fix are performed on the mainline, under the assumption that there is little chance of the code becoming destabilized and that no check-points are necessary. It is likely that development changes will be accomplished entirely in the local view and checked in when finished.

If the project development is performed at multiple geographic locations, it may be reasonable to provide each location with its own mainline and may or may not synchronize their mainlines depending on the type of development. If it does need to be synchronized, then there will be a merge policy for mainline as well. Usually, the mainline will not have to deal with merge policies if it is a single line of development.

1.6 Multiple Mainline Based Branching Pattern

If a project has multiple products based on a common baseline, but otherwise independent of each other, then a Multiple Mainline branching strategy can be applied. In this kind of pattern, the trunk may hold more than one mainline, which would initially share the same core. However, the policies for these mainlines would differ significantly in terms of their starting compositions and merge policies.

The trunk will have its own release schedule, and how the trunk's releases are incorporated into products will depend on how the project functions. The other mainline variants initially branch off from a well-defined version of the trunk's core and may combine the core release package with their own course of development.

1.7 Release Based Branching Pattern [Major-Minor/Integrated-Minor Release]

Most projects need to deal with major (or integrated), minor, and patch-level releases. In such cases a release-based branching strategy can be used. The trunk or mainline holds the production code, i.e. the product that has been released. Major or integrated releases are often characterized by feature and function content changes demanding significant development, which may be accompanied by compatibility issues; such a release is branched off from the trunk at a specific version.

Minor releases may include new feature development like major releases, but smaller in scale and with fewer or no compatibility issues. These are smaller releases, meaning their duration is relatively short, with smaller release schedules, smaller development teams, and less time required for release completion.

 

1.8 Release Based Branching Pattern including Patch Releases

Patch releases, sometimes also known as hotfix releases, are sub-releases that contain minor fixes to production code that has already been released. Regardless of any ongoing or future release efforts, patch release work focuses on fixing bugs that cannot wait until the next major or minor release, and hence these are very short-lived branches.

 

A patch release may be branched off from the mainline when it has the latest production code (as shown in the above figure), or it may be branched off from the Release branch after it has been frozen and released into production (as shown in the below figure). In either case, once the patch work is complete and released, it should be merged back into the mainline.

 

1.9 Branching Pattern including 3rd Party vendor code

In cases where client projects work with 3rd party vendors too for development of software products, this kind of branching pattern may be used. It is assumed here that the 3rd party vendor does not use the same SCM Tool or same branching pattern. Instead the code developed by them is imported into the project’s owned source code repository from time to time. This scenario’s complexity may differ as per project’s needs and scalability.

The imported code may be a raw code which might need to be worked upon initially to set it up on the project’s environments. The Upstream branch is kept segregated as it is expected that there will be more imports of raw code from 3rd party vendor as and when development proceeds. The Setup branch is created where in the imported code is worked upon to set up before it is moved into Development branches. A Patch branch is created in parallel to Setup branch, where only patch release code of 3rd party vendor is imported which may not require the setup work. The Development branch which is branched off from Trunk has the ongoing development work wherein the 3rd party code is integrated as well.

The figure below describes a typical scenario of 3rd party code imported from Upstream branch into the successive branches of current Branching pattern.

 

1.10 Promotional Based Branching Pattern

In a Promotional Model, projects branch-off of each other and have their own set of releases. The logic of such a model may appear easy as the future versions of the product are seen to be built upon the past versions and the physical representation of the flow appears valid.

However, such models are not desirable models for an SCM strategy. It increases the complexity of merge policy when dealing with a Codeline of derivative development or distributed development.

Derivative development is development not performed for product release content but for other purposes, such as proof-of-concept prototypes, research, or customized versions, which may or may not be integrated with the main product development. Distributed development, which is carried out at multiple locations with the same goals, may be tracked at each location independently and thus may require considerably more effort to synchronize.

 

 

Branching & Merging Strategies (SCM - Part 2)


Branching Model

A Branching Model is a representation of the branching structure followed in a project's SDLC. A branching model may comprise one or more branches, and the workflow of branching and merging will depend on the software development methodology used by the project or organization.

1.1 Various types of Branches

Branches can be of various types and names which may be created to serve different purposes. Few of the most popular Branching types are listed below:

  • Main branch: It is ideally the most stable branch which has the latest production code in it. No coding and development is performed here.
  • Release branch: The release branch is drawn out from the Main branch which has the latest production code i.e., code of last successful release. It contains the version one is currently locking down prior to the new release. Development will be very minimal or not at all on Release branch.
  • Develop branch: Develop branch is drawn out from the Release branch and is the one where actual development code is stored. Developers do their required changes here and nightly builds are performed on this branch. Code is deployed to lower environments and the software is tested on its quality, performance and functionality.
  • Feature Branch: Feature branches are created when a specific feature needs to be developed and sub-teams are organized to work on specific features independently without disrupting another team's work. The feature branch usually exists while the specific feature of the software or module is in development. Once the feature development is completed successfully, the feature branch is merged back into the develop branch to add the new feature to the upcoming release. In case of a failed or abandoned feature, the branch may be discarded without merging it back into the develop branch.
  • Hotfix Branch: A hotfix branch is branched off from the corresponding tag on the master branch that marks the latest production version. It arises from the necessity to act immediately upon an undesired state of a live production version, for e.g., when there is a bug in the production, or any other issue which needs immediate attention and cannot be waited until the next release.

1.2 Software Development Methodologies

A Branching Model will differ based on the software development methodology used by the project.

The Software Development Lifecycle (SDLC) is a framework which describes the different phases in a software development project. It consists of several phases and each phase has distinct activities, functions and responsibilities. SDLC is a framework for planning, creating, testing and deploying a software application or a product.

The SDLC phases in any project begin after the project is initiated or planned and end once the software or product is deployed into production or handed over to the customer/clients. The work completed in each phase produces results as artefacts that are used as inputs to the later phases.

Different software development models implement these phases differently, for example:

  • Waterfall model
  • V-Model
  • Agile model

1.3 Waterfall Model

The Waterfall model is a sequential model where the phases of the Software Development Lifecycle (SDLC) are executed in a linear order; i.e., the project can be in only one SDLC phase at a time. The current phase's work is completed, reviewed and approved before the project can move to the next phase, and a phase once completed cannot be revisited. This type of model is a good fit when the requirements and the technology in a project are well understood and the product definition is stable.

A Branching Model of a project following the Waterfall Methodology will have a sequential approach to its workflow with respect to branch creation/deletion and merging. For e.g., new branches may be created only when a previous branch is closed after a release has gone into production.

1.4 Agile Model

The iterative approach follows incremental development, iterating through the SDLC multiple times during a project. Agile software models display iterative, adaptive characteristics which make them very lightweight. Agile models are less rigid than sequential approaches, as they have reduced documentation and can accommodate the rapid changes that are inherent in a product’s lifecycle.

During each iteration, the functionality of the software is enhanced and the resulting application is demonstrated to the client/customer. The client/customer then provides feedback, which is incorporated into the system's requirements, and the software is enhanced and developed accordingly. These iterations continue, with the software evolving through successive versions, until a full application is developed that meets all the requirements.

Common agile practices include:

  • TDD (Test Driven Development)
  • User Stories (“As who, I want what and why?”)
  • XP (Extreme programming)
  • Scrum approach (consisting of Sprints)

A Branching Model of a project following the Agile Methodology will have an iterative approach to its workflow with respect to branch creation/deletion and merging. For e.g., parallel branches may be created for different releases in an Agile model.

1.5 Branching & Merging Model Timeline - Waterfall vs. Agile

Let us now see a timeline model which demonstrates how the Waterfall and Agile models differ in terms of merge flow over time. The small iconic “C” symbols represent the commits performed on the branches, “M” represents merged code, while “R” represents a released product or code. As shown below, the Waterfall branching model takes a linear flow wherein new branches are cut only after the previous release has been completed, while the Agile branching model works with parallel releases.

1.6 The Fortune in Your Ferrari – DevOps in the Branching Model

A DevOps branch can be used as a bonus in any SCM branching model. It is not mandatory to have a DevOps branch in a branching model; however, having a placeholder in SCM for DevOps data can have several benefits, some of which are discussed here. The DevOps branch can be created and kept segregated from the other branches which hold the code base. This branch is responsible for storing the automated build and deploy scripts, design and testing documents, release notes with artifact and build version details, etc.

If the project consists of one or more applications which require several repositories to be maintained for the code, the automated build and deploy scripts might be specific to the application. In that case, a DevOps repository can be created and maintained to store the automation scripts of several applications.

1.7 DevOps Structure

A DevOps branch or repository may have a directory structure like the one described below, depending upon the project. The list below describes what kind of data can be stored in such a branch/repository and how it benefits a software development project; a placeholder layout is sketched after the list.

  • Environment: The Environment directory may contain the environment-specific deployment or test scripts. Deployment scripts usually consist of tasks that push code from source control to a non-development tier, or all the steps needed to move the code from the development to the production environment. The deployment tasks may include creating databases, modifying environment-specific configuration files, provisioning servers or other administration tasks.
  • Testing Environments: The purpose of a test environment is to provide a facility for human testers to manually test new or changed code, or to run automated tests against it. Different types of testing are usually carried out in different types of test environments. Automated scripts related to the different test environments can be stored in their respective directories in the DevOps branch.
  • DEV: A development environment is usually the programmer's workstation where changes to the software are developed and the code is programmed and tested. A development environment may include development tools like a compiler, an integrated development environment, libraries or support software, etc. which will not be present in a production or end user's environment. Unit testing is usually performed in the development environment, where individual units of source code, usage and operating procedures are tested. A unit is the smallest testable part of an application. Scripts related to unit testing can be stored here.
  • CIT: Component Integration Testing is the kind of testing where the data flow between two or more components is tested. It usually occurs after unit testing and before validation testing.
  • SIT: System Integration Testing is performed to test the overall functionality of a complete system comprised of many subsystem components. Here the data flow between two or more systems is tested to verify and validate that the system meets its requirements and performs in accordance with the end user's expectations.
  • UAT: User Acceptance Testing is conducted to verify that the software will work for the end user. It is passed when the SME (Subject Matter Expert), who may be the client, accepts the solution. UAT can act as a final verification of the required business functionality and of the proper functioning of the system before it is released to the live production environment.
  • PROD: The production environment is the live environment that end-users or customers/clients directly interact with. Deploying a code to production means releasing the product to live servers.
  • Design docs: This directory may comprise the design documents specifying use of design patterns, code or design guidelines, user interface standards, reports, Requirement Traceability Matrix documents, scope documents outlining the high-level scope of the project, functional specification documents detailing the functional requirements at a business level, detailed technical design documents, user guide documents, etc. which are prepared throughout the SDLC phases.
  • Build: This directory can contain build automation scripts that define the process of automating the creation of a software build. The build scripts can include processes for compiling source code into binary code, packaging binary code and generating build artifacts.
  • Release Notes: This directory can contain the release notes prepared for every major or minor release. These are the documents which contain all information about the new enhancements or the known issues of the final build that went into production as the latest release; it may also include user guides and training materials. These can be prepared by technical writers or Quality Assurance teams and stored here.
  • Infrastructure Management: There are many DevOps tools which also help set up Infrastructure as Code. This means all the steps required to set up an infrastructure can be automated via infrastructure automation tools and reused on multiple servers at one time. A few popular tools are Ansible, Chef and Puppet. Such infrastructure management requires scripts, which can also be version controlled and tagged for their stable and tested versions. Thus, there can be a placeholder in the DevOps branch or repository for infrastructure management code too.
  • Framework Management: Frameworks provide developers with a conceptual platform to reuse a well-defined application programming interface (API). Developers may redefine or override the common code with generic functionality, or may extend the framework’s functionality. A placeholder in the DevOps branch can help store and track versions of framework management code.
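The layout below is one possible way such a DevOps branch or repository could be organized; the directory names simply mirror the items described above and are assumptions rather than a mandated structure.

    devops-repo/
    ├── environments/
    │   ├── dev/           # developer workstation and unit-test scripts
    │   ├── cit/           # component integration test scripts
    │   ├── sit/           # system integration test scripts
    │   ├── uat/           # user acceptance test scripts
    │   └── prod/          # production deployment scripts
    ├── design-docs/       # design guidelines, RTM, scope and functional specifications
    ├── build/             # build automation scripts (compile, package, publish artifacts)
    ├── release-notes/     # notes for every major or minor release
    ├── infrastructure/    # Ansible/Chef/Puppet style infrastructure-as-code
    └── framework/         # shared framework and API management code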

1.8 Branching and Merging Models for Prominent SCM Tools

There are many version control tools available in the market. Let us try to list a few popular tools and the branching and merging models for the same.

1.8.1 GitFlow based Branching Model

1.8.2 Mercurial based Branching Model

1.8.3 ClearCase based Branching Model

ClearCase supports a variety of strategies that can be tailored to a team's development environment, and it provides the building blocks to build a successful branching setup. A branching strategy will encompass not just creating branch types but also effective use of other ClearCase metadata (labels, attributes, triggers, etc.), view creation/administration/maintenance, configuration specifications, naming conventions, and an underlying policy or process to follow.

  • Incremental version history for each file.
  • Every file has a different history.
  • Every version has a predecessor.
  • Branching and merging between baselines.
  • The version 0 on a branch has the same contents as the version at the branch point. Version 0 on the main branch is defined to be empty.
  • Regardless of the approach to branching, it is good to merge early and often.
  • Branches where changes are eventually gathered help facilitate the integration and release of software code.

1.8.4 Team Foundation Server (TFS) based Branching Model

  • Scenario 1 – No Branches: Your team works only from the main source tree. In this case, you do not create branches and you do not need isolation. This scenario is generally for small or medium size teams that do not require isolation for teams or for features, and do not need the isolation for releases.
  • Scenario 2 – Branch for Release: Your team creates branches to support an ongoing release. This is the next most common case where you need to create a branch to stabilize for a release. In this case, your team creates a branch before release time to stabilize the release and then merges changes from the release branch back into the main source tree after the software is released.
  • Scenario 3 – Branch for Maintenance: Your team creates a branch to maintain an old build. In this case, you create a branch for your maintenance efforts, so that you do not destabilize your current production builds. You may or may not merge changes from the maintenance branch back into the main tree. For example, you might work on a quick fix for a subset of customers that you do not want to include in the main build.
  • Scenario 4 – Branch for Feature: Your team creates branches based on features. In this case, you create a development branch, perform work in the development branch, and then merge back into your main source tree. You can create separate branches for work on specific features to be completed in parallel.
  • Scenario 5 – Branch for Team: You branch to isolate sub-teams so they can work without being subject to breaking changes, or can work in parallel towards unique milestones.

1.8.5 Subversion based Branching Model

1.8.6 CVS/PVCS based Branching Model

Concurrent Versions System (CVS) permits creating branches from other branches. The root branch is treated as a trunk, and the sub-branches can be managed using any of the branch styles depicted below. Nested branches are most useful when using CVS for configuration management. Branching policies help ensure that projects branch according to consistent rules, and having and using consistent policies can also help keep merging as simple as possible. A minimal command sequence following these policies is sketched after the list.

  • Develop policies that work for your projects and your team.
  • Have an overall design of the project.
  • Ensure each branch has a purpose.
  • Minimize the number of active branches.
  • CVS permits nested branches, but there are few situations where multiple layers of nesting help a project.
  • Use a consistent branch-naming scheme that expresses the purpose of each branch.
  • Tag the branch at every merge point.
  • Tag the trunk at every branch point and just before and after every merge.
  • Use consistent tag-naming schemes for the tags associated with the branches.
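As an illustration of these tagging and branching policies, the CVS sketch below tags the trunk at the branch point, works on a release branch, and tags again at the merge point; the branch and tag names are illustrative assumptions.

    # tag the trunk at the branch point, then create the branch
    cvs tag root-of-release-1_0
    cvs tag -b release-1_0-branch

    # switch a working copy onto the branch and work there
    cvs update -r release-1_0-branch
    cvs commit -m "Stabilization fix on the release branch"

    # tag the branch at the merge point, then merge it into the trunk
    cvs tag release-1_0-merge-1
    cvs update -A                      # move the working copy back to the trunk
    cvs update -j release-1_0-branch   # merge branch changes into the trunk working copy
    cvs commit -m "Merge release-1_0-branch into trunk"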

1.9 Trunk Based Branching Model

Trunk Based Development, also known as TBD, is a source code control approach where a single main branch, the "trunk", is used by developers to work on instead of other long-lived branches. Any release branches created from the trunk are usually short-lived, while the trunk is the only long-lived branch. Developers make multiple commits a day to trunk, which helps avoid merge hell and broken builds. This in turn satisfies the core requirement of Continuous Integration, where several commits to trunk are performed every day. The code rarely breaks and is always releasable on demand, which supports Continuous Delivery, making TBD a key enabler of CI/CD.

Thus, the points below denote a typical TBD workflow; a minimal Git sketch follows the list:

  • Commits to master at least once per day
  • Remote branches are only for releases
  • Local branches are used by developers
  • Hotfixes are also committed to master
  • Hotfixes are cherry picked into supported releases
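The sketch below shows what that workflow could look like with plain Git commands; the branch name release/1.2 and the commit messages are illustrative assumptions.

    # commit small, releasable changes directly to trunk several times a day
    git checkout master
    git pull --rebase origin master
    git commit -am "Small, releasable change"
    git push origin master

    # cut a short-lived release branch only when a release is being prepared
    git checkout -b release/1.2 master
    git push origin release/1.2

    # hotfixes land on trunk first and are then cherry-picked into the supported release
    git checkout master
    git commit -am "Hotfix for a production issue"
    git push origin master
    git checkout release/1.2
    git cherry-pick <hotfix-commit-sha>
    git push origin release/1.2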

Survey data and graphical representations available on the internet suggest that the concept of separate Release and Development branches was most popular during 1998–2005, but its use declined later because long-lived Release/Development branches required high maintenance. Short-lived branches gained popularity with time. Trunk Based Development, however, has been found to be the most stable form of development over the years, and it works well with short-lived release branches.

Based on the workflow of TBD, the following points summarize its advantages over other branching models:

  • Trunk Based Developments are lean in nature. Teams can communicate often and effectively to streamline development
  • Release at any time
  • Smaller merge problems
  • Merge problems surface early
  • Release branches are cheap
  • Separate branches may not be needed even for geographically distributed development
  • Feature toggling and decision routing are used for feature development and deployment (see the sketch after this list)
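As a loose illustration of feature toggling, the sketch below drives an unfinished feature from configuration rather than from a branch; the flag name and the deploy.sh script are hypothetical.

    # the incomplete feature ships on trunk but stays dark until the flag is switched on
    export FEATURE_NEW_CHECKOUT=false       # hypothetical flag, set per environment
    if [ "$FEATURE_NEW_CHECKOUT" = "true" ]; then
        ./deploy.sh --enable-new-checkout   # hypothetical deployment script and option
    else
        ./deploy.sh
    fi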

As the complexity of a project or application increases, creating too many branches and maintaining a complex branching and merging strategy can become cumbersome and error-prone. The aim of any project strategy should be minimal, or even zero, branching. Trunk based development is the branching method that best helps avoid “merge hell”: the code does not break often and is always releasable on demand, which helps boost Continuous Delivery.

While using the TBD branching pattern it is advisable to use peer code review tools, or a pull-based strategy rather than direct pushes, to ensure code quality and reduce the chances of bad code being introduced into trunk. While the Continuous Integration phases of automated code analysis and unit testing help ensure high code quality, peer code review is another shift-left technique, and tools like Gerrit can automate the peer review workflow, as sketched below.
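The sketch below shows how such a review-before-merge flow could look when Gerrit is assumed as the review tool; the commit message is illustrative.

    # push the change to a review ref instead of directly to trunk
    git add .
    git commit -m "Small change intended for trunk"
    git push origin HEAD:refs/for/master    # creates a pending change in Gerrit for peer review

    # once reviewers approve and CI passes, Gerrit merges the change to master;
    # the developer simply keeps the local trunk up to date
    git pull --rebase origin master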

Extensible Flexfields in Product Management


In this article, let’s see how to configure Extensible Flexfields in Oracle Product Management.

Caution: Before proceeding with this setup, one must know that this feature is available only to users with a Product Hub license. Descriptive Flexfields, however, are available to all license types.

Extensible Flexfields (EFFs) are a newer type of flexfield, similar to Descriptive Flexfields (DFFs). DFFs have two limitations: (1) they can appear only on the same page, and (2) the number of fields is limited to the attribute columns provided in the table.

With its multi-page feature, an EFF can overcome the above limitations; the number of pages that can be created is theoretically unlimited.

Let us use an example to illustrate this.

FlashGate is a multinational company that manufactures storage devices and computer hardware. It wants to use Oracle Cloud to maintain its item master information. Oracle Product Data Hub uses general item specifications to store item attributes. To store the information of, say, a flash disk, one can create user-defined attributes (EFFs).

Step 1:

Capture the requirement and prepare a high-level blueprint of how the user attributes should be defined.

Features: capacity in GB, Color

Other Details: Warranty and Date of Manufacturing

Step 2:

Determine the attributes and nature of data

Step 3:

Setup Flexfield

a. Navigation

Define the Extensible Flexfields by navigating to Setup and Maintenance.

b. Create Context

Also referred to as an attribute group; it comprises the segments that make the grouping meaningful.

c. Create Segment

Segments are individual entities that are present inside a Context. Segments can be any of the four data types provided.

  1. Character
  2. Number
  3. Date
  4. Date and Time

d. Create corresponding Value Set

Value sets are predefined groups of values that belong to the same type. A value set can be any of the following types:

Format Only – Text Box

Independent – Used for drop downs

Dependent – list of values derived from independent value of other attribute

Subset – Subset of values of existing independent value set

Table  – Values are derived from a table column using SQL clause

e. Associate Contexts to Category

f. Create Page and associate Contexts

EFFs allow creation of multiple custom pages on a standard page. Here we are creating two pages on standard Item page.

Flexfields, once created, should be associated with pages. Consultants often create flexfields and forget to associate them with pages; in such cases the flexfields will not show up in the user interface.

Step 4: Deploy Flexfield

Step 5: Verify Flexfields

Navigate to Product Information Management and then click on Manage Items. Search for any item and check its specifications. Define values and save the item. In case the flexfields are not visible, it is recommended to log out of the application and launch it in a new window.

Hope you liked this article. Watch this space for more articles.

 

 

 

Import a GL Journal to Fusion Financials using ADFDI


Writing every line of a journal manually is not always practical. Instead of entering many journal lines one by one, a journal can be imported directly from a spreadsheet through ADFDI (ADF Desktop Integration).

The following example walks through the process step by step:

  • From Task Bar choose Create Journal in Spreadsheet

 

  • Connect Spreadsheet to the Instance Access

  • Fill in Details regarding the Ledger and Journal

  • Write the Journal Lines

  • Enter Journal Lines

  • Post Journal - in order to post the journal, use Submit Journal

  • The final step - a confirmation message that the journal was posted

 

 

 


SRE or DevOps



In my recent project experience, I realised that customers are looking for SREs rather than just DevOps engineers. Obviously, I knew there were a few similarities, but what the actual differences were is what confused me, so I thought of writing a brief article describing the basic differences between the two.

I think the primary difference between the two is whether you oversee the WHAT or the HOW, where the former relates more to the DevOps engineer and the latter to the SRE.

DevOps

So, what I understand is that DevOps is a set of practices that automates the processes between software development and operations teams, allowing them to build, test, release and monitor software faster and more reliably. DevOps builds a culture of collaboration between teams that historically functioned in relative silos. Its benefits include increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.

SRE

There is a book by Betsy Beyer, Chris Jones, Jennifer Petoff and Niall Richard Murphy called Site Reliability Engineering; it provides their view on what SRE means, how it works, and how it compares to other ways of doing things in the industry, essentially describing "what happens when a software engineer is tasked with what used to be called operations".

According to Wikipedia, "SRE is a discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems."

Differences

To me both sound pretty much similar, which is what causes most of the confusion.

There is a thin line between DevOps and SRE: the former defines the overarching principles of WHAT the development team should expect from operations and vice versa, allowing each to understand how their work influences the other. DevOps does not define the HOW of this process, and this is where SRE comes in.

One could equivalently view SRE as a specific implementation of DevOps with some idiosyncratic extensions.

Summarizing

  1. SRE is a role, whereas DevOps is more of a culture. The latter defines WHAT needs to be achieved and the former defines HOW it can be achieved.
  2. SREs practice DevOps, since they are focused on aspects of the HOW, like availability, monitoring, and scalability.

 

Continuing Trends for DevOps


Seeing more closely

Momentum for DevOps has been well underway in many organizations over the last few years. What I have seen, especially in late 2018 and 2019, is that DevOps has really become the focus area for many organizations looking to change the way they work and build, test, deploy and release their software to the market. Having said that, below are some trends that customers are moving towards now and in the future.

Serverless Computing

Serverless computing is the way forward and can change the way applications are developed, tested, and operated. With serverless computing, companies can focus primarily on application development while server provisioning is taken care of by cloud providers. Serverless computing allows companies to pay only for their usage, which can significantly reduce operating cost, and helps organizations achieve agility and efficiency in the long run. More details can be found on Wikipedia.

Embedding Security

One of the highest priorities for any application is to ensure the software is not prone to security breaches. Security is most effective when it is planned and managed throughout every stage of the software development life cycle (SDLC). Embedding security in the DevOps pipeline will ensure that:

  • Existing people, processes and tools successfully drive security requirements into solutions
  • Development teams are enabled to succeed in creating secure applications
  • Applications are secured from the plan and design phases through ongoing operations and retirement
  • New technologies can be embraced

More details can be found on Wikipedia.

Everything as Code

With infrastructure these days being provisioned using code, there is no denying that code is the way forward. In today's modern tooling ecosystem, operations and DevOps engineers need to understand how almost everything can be created or re-created using just code. Some examples include (a minimal sketch follows the list):

  1. Configuration management - Chef, Puppet, Ansible, SaltStack etc
  2. Containers — Docker or other Open Containers
  3. Infrastructure as Code — Terraform
  4. Continuous Integration (CI)/Continuous Delivery(CD) — Jenkins and the Jenkinsfile, Travis CI and the travis.yaml file, TeamCity and Kotlin etc
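As a simple illustration of the idea, the sketch below version-controls some hypothetical Terraform definition files and then recreates the environment from them on demand; the file names and commit message are assumptions.

    # infrastructure definitions live in source control like any other code
    git add main.tf variables.tf
    git commit -m "Describe the web tier as code"

    # the same definitions can then recreate the environment on demand
    terraform init       # download providers and modules
    terraform plan       # preview the changes
    terraform apply      # create or update the infrastructure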

Automation Everywhere

Artificial Intelligence and Data Science have become game-changers in this modern era. AI is embedded in most applications these days and is enabling DevOps teams to seek out automation possibilities within their workflow streams. As DevOps matures within the organisation and the amount of data generated increases day by day, that data can be analysed with artificial intelligence and data science techniques to generate useful insights.

Microservices Architecture for DevOps

These days, Microservices is one of the most preferred architectures for software development. It follows the idea of breaking an application down into decoupled, independent modules, which makes continuous deployment seamless. Once an application module is ready, it can be deployed to maintain a continuous delivery pipeline. The collaboration of DevOps and Microservices helps the development team eliminate dependencies, which ultimately leads to faster product delivery to the market.

Conclusion

Day by day DevOps is improving, with its emphasis on process improvement, ease of operation and faster feedback. With the above strong trends, we will see a rapid shift in how we deliver and operate software going forward.

 

Automation Maturity Stages - POV


Automation Maturity Stages - POV

As digital transformation continues to take shape within organisations, senior leadership is trying to gauge the benefits and value of automation in reducing errors and increasing productivity. Their goal is always lower operating cost and risk, fewer errors and faster execution (productivity).

We are currently moving towards a digital age where it is becoming extremely important to modernize enterprises at a very rapid pace. New business models are evolving ever faster and there is a business need to fund these rapid innovations. Enterprises still allocate 60%-70% of their IT spend to “run the business”, with the remainder going to “transform the business”. Hence, there is an urgency to bring efficiency to “run the business” through automation.

From my experience, the journey to mature automation within an organisation typically falls into one of the following five stages:

 

Level Zero: Adhoc

At this initial level of maturity, automation is not typically a planned activity but happens in more reactive/adhoc basis. There would be few adhoc scripts and some internal or opensource tools that are available at the team’s disposal. The automation tasks would typically include runbooks, automation of some standard operating procedures with scripts or scheduled jobs for routine tasks. Requests are obtained from various sources within the organisation, completed inconsistently without having any process documentation. Difficult requests are often directed to individuals who have assisted with similar requests in the past.

Level One: Opportunistic

At level one we are looking at automation initiatives that are executed to address an identified area of improvement or a specific problem. In this stage, customers have a formal method of requesting services, but it is not fully followed or enforced. At this stage organisations have limited processes and tools, and are still highly reactive, relying heavily on a specific individual’s knowledge. One example could be a nightly job that monitors servers and produces an hourly report. At this level we are looking at some form of formal evaluation of tools and some budget allocation to address automation, with team-level management focus.

Level Two: Practiced

Level two is a stage where we are looking at project or program level automation initiatives executed with defined automation targets and specific metrics. At this level the initiation is more proactive, exploring areas of automation to achieve a defined level of efficiency, productivity or quality. At this stage there would be exploration of multiple automation tools such as RPA and DevOps, and even some level of custom-developed tooling to suit the program or project. One would also find a prioritized backlog of automation work.

Level Three: Accelerated

At this level, automation is no longer an initiative but becomes the organisation's standard way of working. Automation initiatives are taken up at the organisation level with a portfolio of platforms and tools. An automation strategy with organisation-wide knowledge and drives is well defined and documented. An enterprise-wide automation platform is accessible to all teams across the organisation. In this stage, organisations are focused on process execution and excellence, with KPIs centered around IT instead of the overall business.

Level Four: Optimised

At this highest level of automation, the automated processes are optimised and become adaptive to the demands of the business. All projects and programs have been automated and aligned with business initiatives, directly affecting business outcomes. At the optimised stage, we are also looking at technologies and tools which support self-learning, self-healing, auto-scaling and auto-optimisation across various processes. This level of automation is driven by strategy and vision established at all levels of the organisation. A road-map is well established to achieve the goals as they evolve; this will not only help in adopting technologies, but will also transform the way in which systems support the business and its processes.

 

Oracle Enterprise Data Management Cloud Service (EDMCS) Training


Oracle Enterprise Data Management Cloud governs structural changes across applications while preserving business context, and simplifying secure data sharing among end users to accelerate cloud deployment, assimilate acquired entities, and align across silos to build an authoritative source of enterprise information assets. You can use Enterprise Data Management Cloud as a standalone service or a business process within Enterprise Performance Management Cloud.

{tab Course Contents | orange}

{tab-l1 Day 1 | orange}

OVERVIEW OF EDMCS

ON-PREMISE DRM Vs EDMCS

TERMINOLOGY OF EDMCS

Nodes & Types of Nodes
Node Types
Hierarchy Sets
Node Set
View & View Points
Properties
Request

{tab-l1 Day 2 | green}

WORKING WITH APPLICATIONS

Understanding Application Types
Understanding Registering Applications
Understanding Dimensions
Understanding Modifying Applications
Importing Dimensions
Exporting Dimensions

WORKING WITH PLANNING APPLICATIONS

Registering Planning Applications
Registering Cubes, Application Settings, and Dimensions
Predefined Properties for Planning Applications
Importing Planning Dimensions
Exporting Planning Dimensions



WORKING WITH ORACLE FINANCIALS CLOUD GENERAL LEDGER APPLICATIONS

Registering Oracle Financials Cloud General Ledger Applications
Registering Segments and Trees
Applying Registration Changes
Predefined Properties for Oracle Financials Cloud General Ledger Applications
Importing Oracle Financials Cloud General Ledger Dimensions
Exporting Oracle Financials Cloud General Ledger Dimensions

{tab-l1 Day 3 | red}

WORKING WITH CUSTOM APPLICATIONS

Registering a Custom Application
Adding, Removing, or Modifying a Custom Application Dimension
Custom Dimension Import and Export Settings
Adding or Modifying a Node Type for a Custom Dimension
Importing a Custom Application Dimension
Exporting a Custom Application Dimension


WORKING WITH VIEW & VIEW POINTS

Creating View & Viewpoints
Inspecting & Downloading a Viewpoint
Sharing the data between viewpoints
Comparing Viewpoints
Alternate Viewpoint
Subscription
Mapping Viewpoint
Archiving & Deleting View & Viewpoint

{tab-l1 Day 4 | blue}

WORKING WITH REQUEST

Making Changes Interactively through Request
Making Changes Using a Load File
Action code for Load Request file
Inspecting Requests

SECURITY

My Services Roles for EDMCS
Creating User & Groups
EDMCS Component Level Security

WORKFLOW & APPROVALS

Workflow Stages
Approval Process
Process Flow

DAILY MAINTENANCE & MIGRATION

EPM AUTOMATE COMMANDS FOR EDMCS

QUESTIONS & ANSWERS

{/tabs}
{tab Enroll | grey}

 
 
 
 
 


{tab Training Hours | red}

Start Date: 12 October 2019

Training Schedule: 12, 13, 19 & 20th Oct 2019

Timing: 12:00 NOON GMT | 07:00AM EST | 4:00AM PST | 6:00AM CST | 5:00AM MST | 5:30PM IST  | 01:00PM GMT+1

This training will run for 4 days over weekends

{/tabs}
{jcomments off}

 

Branching & Merging Strategies (SCM Part - 3)


Security in SCM

1.1 Introduction

Security in Software Configuration Management should aim to identify misconfigurations or items that make systems vulnerable, and to track any unusual or "bad" changes to critical data. Organizations can quickly identify a breach if standard configurations, such as corporate hardening standards, are set up for their systems; these can then be continuously monitored for indicators of compromise, or baselined so that deviations from the standards can be detected.

Security aspects in SCM should provide a complete set of security functionality, including:

Authentication and Authorization: To ensure that only authorized members can perform specific changes in the whole development process. The system should be able to safely authenticate its users. Not only this, the kind of authentication mechanism used is of essential importance too. If the system uses tokens like passwords, proper password protection mechanisms should be used over the network, so that a password is not readable or accessible to anyone other than its intended owner.

Confidentiality and Integrity: The system should maintain the confidentiality of the data it stores, such that only authorized members can read/write/update information based on the permissions provided to them, thus guarding against unintentional or malicious repository corruption.

Access control: To ensure that threats related to the confidentiality of the project's development are assessed from time to time as the project grows, and are properly handled. The system should be designed so that the privacy of data is appropriately handled, i.e., it should not be possible to easily retrieve data that users want to protect: either such data should not be stored, or, if it must be stored, it should be stored in such a way that it does not spread.

Availability and Trust: Security in SCM should ensure that the system is available to those who need it and is protected from malicious attacks. Beyond availability, the system should make sure that its communication with its authorized users takes place over a trusted path.

Accountability: Every action in the SCM process should be recorded. Any unauthorized action should be highlighted along with the frequency of its occurrence, and root cause analysis should be performed for any security threat, followed by a plan for the required action.

Audit and Accuracy: Audit mechanisms should be followed by monitoring changes between different versions of objects, observing the history of any configuration item, and ensuring that every change process is documented and accurate as per the defined policies.

Self-protection and recovery mechanisms: The system should be able to protect itself; if faced with a malicious attack it should be able to fix and recover itself, or shut down/pause immediately while notifying the concerned parties as early as possible so that appropriate action can be taken. The timestamps, change sets, history of data and logs of the system should be trustworthy and reliable.

Record history: Security in SCM should protect not only the "current" version of the software but also its previous versions, so that if any previous version is needed it can be recalled correctly. Any changes made to the software should be recorded; even the "undoing" of any change should be recorded. If very old history logs need to be archived, proper archiving and protection mechanisms should be designed and used.

1.2 Authorization & Authentication

The Authentication and Authorization functionality should be role-driven in SCM, whereby the following roles can be defined.

Developer: A Developer (or user having "Developer" role) is responsible for the development of modules (single/multiple), features, or any enhancements of the software project code. A developer can perform tests in his/her local environment which may be known as "Dev" environment.

Reviewer: A Reviewer is responsible for reviewing the code of several Developers before it can be integrated or merged with other modules of the software project, or before it is approved for next level of testing or development.

Integrator/ Integration Manager: An Integrator approves and merges several modules of the software product checked in by the developers and performs builds to ensure sanctity of the merged code. He or she is responsible for integration testing as well.

Build Manager: A Build Manager is responsible for performing builds, managing the built versions and overseeing the completion of the allocated model/project/module or function; for rebuilding any released product versions; and for notifying the integrator or developers through a defect management system if any issue is found in any of the built products.

SQA Manager: The SQA Manager tests the quality of the product by performing some final quality tests of the developed product. These tests can be functional tests or User Acceptance tests, Operational tests, etc. SQA Manager may then mark the product version to be released.

System Administrator: A System Administrator is responsible for the overall administration of the SCM system including administration of any resources of SCM, archiving of any built product versions, security aspects in SCM, and establishing backup policies and maintaining backups of resources.

This functionality can be integrated with ADFS (Active Directory Federation Services) to control user access. ADFS is a software component developed by Microsoft which operates on the Windows OS. It is used to provide end users with single sign-on (SSO) access to various applications or systems located across corporate boundaries, outside the organization's firewall. ADFS is built upon Microsoft's traditional Active Directory technology, which stores an organization's usernames and passwords and uses this data to manage and secure access to workstations and servers on a Windows domain. ADFS helps reduce the complexity around password management and guest account provisioning by providing the SSO facility.

1.3 Security with Version Control Tools

A Repository is like the central database containing the complete history of all files in the project and shared by all members of a project. It is structured much like a file system containing a hierarchy of nested directories and files. Repositories should be set up on the local disk of the server, and access to the server should be well secured. To establish security, one should never attempt to share a repository between computers by placing it in a shared location on the network. Instead, the SCM tools should provide the security mechanism using suitable protocols for authentication and for the safety of the repositories.

Various version control tools have clients that generally access a repository by communicating with a server. The format of this communication is termed the protocol. Protocols like HTTP and HTTPS are best suited to accessing repositories available via public networks such as the Internet. Some servers might provide access via multiple protocols, e.g. projects providing HTTP for anonymous, read-only access and HTTPS for write access, which may be limited to defined roles. The version control tools can be set up on a web server like the Apache web server using the HTTP/HTTPS protocols, or on web-hosting tools offering secure protocols, like GitHub or Bitbucket.

1.4 Security Types

SSH (Secure Shell): SSH encrypts all data that it sends between clients and servers and allows you to authenticate with either a username and password, or by using certificate-based authentication. It has become the de facto standard when communicating with UNIX/Linux servers and network devices, such as routers and switches.

TLS (Transport Layer Security): TLS is the standard method of encrypting client/server data that starts with a key exchange, authentication, and the implementation of standard ciphers. Many IP-based protocols, such as HTTP (HTTPS), Simple Mail Transfer Protocol (SMTP), Post Office Protocol version 3 (POP3), File Transfer Protocol (FTP), and Network News Transfer Protocol (NNTP), support TLS to encrypt data.

LDAP: The version control tools can be integrated with LDAP (Lightweight Directory Access Protocol), which allows querying items in any directory service that supports it, such as Active Directory (AD). It can provide all sorts of functionality like authentication, group and user management, policy administration and more. LDAP is a protocol for administering the data of a directory service.

Web-hosting Tools: Apart from integrating the version control tools with third-party tools, there are also hosting tools which provide built-in authentication mechanisms. Depending on the SCM architecture, organizations can make use of such web-hosting tools as well; a minimal SSH-based setup is sketched below.
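The sketch below shows how a repository hosted on such a tool could be accessed over SSH so that all traffic is encrypted and key-authenticated; the e-mail address, organization and repository names are hypothetical.

    # generate an SSH key pair and register the public key with the hosting tool
    ssh-keygen -t ed25519 -C "dev@example.com"
    cat ~/.ssh/id_ed25519.pub    # paste this into the hosting tool's SSH key settings

    # clone and push over SSH instead of plain HTTP
    git clone git@github.com:example-org/example-repo.git
    cd example-repo
    git push origin main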

2. Distributed Version Control System (DVCS)

A Distributed Version Control System is a form of version control where there is no single central copy of the codebase; rather, the codebase is mirrored at multiple sites or on every developer's machine. The following diagram illustrates a Distributed Version Control System for an enterprise team working on a multi-site project. Each location has its own set of repositories to which developers can read/write and check in their code, and when they need to combine their work with development done elsewhere, they can push/pull between sites.

A Distributed Version Control System at scale is depicted in the diagram below. Its features are the following (a minimal multi-remote sketch follows the list):

  • Extremely faster pushing and pulling of change sets as the tool only needs to access the hard drive, not a remote server.
  • Commit new change sets locally and then, once the group of change sets are ready push all of them at once.
  • Pushing and pulling without an internet connection.
  • Isolated development.
  • Individual developer has a full copy of the project repository, changes can be shared with one or two other team members at a time for peer review.
  • Integrated with DevOps CI/CD platform.
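The sketch below illustrates, with Git, how a developer at one site could synchronize with another site's repository by treating it as a second remote; the remote name, URL and branch are illustrative assumptions.

    # each site keeps its own repository; another site's repository is added as a second remote
    git remote add site-b ssh://git@scm.site-b.example.com/project.git
    git fetch site-b

    # work proceeds locally and is shared with the local site as usual
    git commit -am "Local change"
    git push origin develop

    # when the sites need to synchronize, changes from the other site are merged in
    git merge site-b/develop
    git push origin develop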

2.1 Challenges of Distributed Development

Distributed development is best suited to modular projects or Model Driven Development (MDD), where different teams can work on specific modules independently, thereby providing interoperability, reusability and maintainability across different languages and platforms, which in turn helps improve software quality and developers’ productivity.


However, there can be some challenges in Distributed Development which can be addressed.

  • May fail at intercommunication: Communication is the key requirement in distributed projects, as it must be open to all involved and not be limited to one single location. If the multiple teams across different locations fail at intercommunication, development processes may be delayed and timelines affected.
  • Requires high management: Distributed development is one of the highest forms of collaboration and thus requires strong management capabilities, a highly efficient infrastructure and a well-defined organization chart with measures to ensure compliance, without which distributed projects may fail.
  • Few programming practices fail: A few programming practices, like pair programming, tend to fail in distributed development if peers are based at different locations.
  • Concern for security: Security is a concern when dealing with distributed development; higher security measures are required in such projects.
  • Productivity may decrease: Studies have shown that employees working in the same office are able to coordinate their work better and are more productive. A distributed environment needs to ensure frequent interaction, not only for good coordination but also to ensure that the development is on the right track, otherwise the results may deviate.

3. SCM for Cross-Platform Development

Cross-platform development is the practice of developing software applications or products which can operate on multiple platforms or environments, so that such applications are independent of the platform and can work well in any habitat. Different strategies are used to accomplish such cross-platform software development. For example, different versions of the same program may be compiled for different operating systems, platform-specific files may be stored for use on their respective operating systems, or developers can make use of modern application programming interfaces (APIs) to adapt a piece of software to a specific platform.

Cross-platform version control Tools:

There are several version control tools that support various platforms and work well for storing the source code of complicated cross-platform software projects, except for a few, like Microsoft Visual SourceSafe (VSS), which is a Windows-only tool.

Listed below are a few popular cross-platform version control tools with different types of repository models; the platforms supported by each tool are listed alongside. The information has been gathered from: https://en.wikipedia.org/wiki/Comparison_of_version_control_software

 

4. SCM for Containers and Microservices

Microservices refers to the microservice architecture, which uses suites of independent services to develop any software, mobile or web application. Each service is created to serve only one specific business function; the services are independent of each other and do not necessarily have to be written in the same programming language or use the same data storage. The microservices use lightweight HTTP, REST or Thrift APIs to communicate among themselves. Microservices are independently deployable and thus can offer more flexibility in various dimensions.

Since microservices are independent of each other and may use different platforms, containers work best with microservices, as they provide the flexibility to use different platforms while making applications very lightweight and easily scalable.

Shown below is a software architecture style, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task.

The following points denote the architecture (a minimal version-control sketch follows the list):

  • Granular, Decentralized, Disposable, Evolutionary Design
  • Fine grained, Distributed
  • Cloud native, API First
  • Principles – Evolutionary Design, Resilience, Automation – Everything as Code
  • Config files, code of microservice are version controlled in SCM tools like GIT
  • Docker, docker files and any other template files like compose YML or Kubernetes deployment files can be versioned controlled in SCM tools like GIT
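The sketch below illustrates how a microservice's source, container definition and deployment templates could be version controlled together in Git; the service name, file names, remote URL and tag are assumptions.

    # a microservice's source, Dockerfile and deployment templates can live in one Git repository
    cd payment-service
    git init
    git add src/ Dockerfile docker-compose.yml k8s/deployment.yaml
    git commit -m "Version control code, container and deployment definitions together"

    # tag the repository so a specific image build can be traced back to the exact config it was built from
    git remote add origin git@github.com:example-org/payment-service.git
    git tag -a v1.0.0 -m "First releasable version of the payment service"
    git push origin main --tags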

 

 


