Monday, July 27, 2015

Compounding

A compounding attribute is nothing but a combination of two characteristics that together form a unique value. Neither is meaningful without the other; in other words, the values have no meaning if they are separated.

Typically in an organization the employee IDs are allocated serially, say 10001, 10002 and so on. Say your organization comes out with a new employee ID scheme where the employee IDs for each location start with 101. The employee IDs for India would then start at India/101 and for the US at US/101. Note that the employees India/101 and US/101 are different. Now, if someone has to contact employee 101, he needs to know the location, without which he cannot uniquely identify the employee. Hence, in this case, location is the compounding attribute.

A maximum of 13 characteristics can be compounded for an InfoObject. The characteristic value can have a maximum of 60 characters; this applies to the concatenated value, i.e. the total length of the compounding characteristics plus the length of the characteristic itself.
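A minimal sketch in plain Python (not SAP code; names and values invented) of how a compounded key behaves - the employee ID is only unique together with its compounding characteristic, location:

employees = {
    ("INDIA", "101"): "Employee A",
    ("US", "101"): "Employee B",
}

def find_employee(location, employee_id):
    # Both parts of the compound key are needed for a unique lookup;
    # "101" on its own matches two different people.
    return employees[(location, employee_id)]

print(find_employee("INDIA", "101"))  # Employee A
print(find_employee("US", "101"))     # Employee B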



Friday, July 24, 2015

Data Loading

LO-related datasources.


  • New Datasource
    • Checking : Check the existing setup tables to see if data is present. Go to SE16 and give the setup table name
      • MC<Application Number>M_0<datasource>SETUP
        • MC02M_0ITMSETUP
    • Deleting : If data is present in the setup tables, delete it and re-load it again. Remember that setup tables are filled for the whole application and not for each datasource.
      • Setup table data will be deleted for the whole 'Purchasing' application if deleting for '02', even though you want a data reload only for the Purchasing Item datasource 2LIS_02_ITM.
        • T-Code : LBWG 
        • Or SBIW -> Settings for Application-Specific Datasources(PI) -> Logistics  -> Managing Extract Structures -> Initialization -> Delete the contents of the setup tables  
    • Filling : Go to the corresponding t-code for each application (unlike deleting, the t-code for filling setup tables is different for each application), or go to SBIW -> Settings for Application-Specific Datasources (PI) -> Logistics -> Managing Extract Structures -> Initialization -> Filling in Setup Tables -> Application-Specific Setup of Statistical Data -> (choose your application)
      • Depending on the screen you get there, selections can be given and data can be filled into the setup tables, or you can run it wide open.
      • Remember to run it in the background, so that it is easy to monitor and know the status. The job will be scheduled under the job name 'RMCENEUA'.
      • Once the job is completed, the data is available to load to BW via 'Full Update'.
  • Existing datasource
    • Once the new field is added to the datasource, there are two ways to go.
      • Historic data is not required for the new field
        • So we need not do anything. Just add the field, and the code in the exit if any is needed, and let the regular process chains pick up the delta and fill the data.
      • Historic data needed
        • For this we need to do a full load. So we need to follow all the steps as for a new datasource and fill the history.
        • Look for downtime (when the deltas are not running), fill the setup tables and do a full load to BW and to the InfoProvider. This takes care of the existing data and also writes data to the new fields.
        • The delta need not and will not be disturbed. No initialization is required.

Thursday, July 23, 2015

BI Content

Steps to follow while installing BI content
-----------------------------------------------------
Select the objects which are needed and drag them to the right side.

In grouping, make sure you are selecting only the necessary objects.

If the upward and downward data flow is needed (transformations, DTPs, etc.), identify the object from the list, go back and search for the object on the left side, drag it individually and install it.

Go for Tree View instead of Hierarchy View (which is the default view) to get a better view of the objects.



Remember, clicking on the parent will select all the objects below it, even though they do not appear selected. You need to take care and select individual objects to install them.


Right-click and select 'Do Not Install Any Below' to make sure.


A better option is to get the technical names of the individual transformations or DTPs, whatever is needed from the flow, and then install each object individually by searching for it in the content. This way, you install only the necessary objects.


Datasources

Replication :
Go to RSDS -> DataSource and Source System -> Replicate

Wednesday, July 22, 2015

Purchasing

Purchasing


Link for information in help.sap.com
Information from SAP content

Datasources


Setup Table Structure : MC02M_0(HDR)SETUP
Deleting Setup Tables : LBWG - Application 02
Setup Table Deletion Job Name : RMCSBWSETUPDELETE
Setup Table Filling Job Name : RMCENEUA
Filling Setup Tables : OLI3BW

Points to Remember

  • Purchasing extractors do not bring records which are incomplete, i.e. EKKO-MEMORY = 'X' (incomplete).
  • A PO item marked for deletion in ECC does not reflect in BW via delta/full loads. Click HERE


Tables
EKKO
EKPO
EKBE
EKKN


Transactions
ME23n
ME2L ( Account Assignment )


2LIS_02_ITM

SYDAT is the item 'created on' date - EKPO-AEDAT.
In LBWE it says EKKO-SYDAT, but it is actually taken from EKPO (*)

2LIS_02_ACC

Base Tables 
EKKO - Header
EKPO - Item
EKKN - Account Assignment in Purchasing Document

2LIS_02_ACC- NETWR
Even though the extract structure shows EKKN as the base table for this field, it is actually calculated using a formula (check the SAP notes link below for more).

For documents with specified purchase order quantity, such as purchase order documents, the net purchase order value is calculated as follows:

NETWR = EKPO-NETWR / EKPO-MENGE * EKKN-MENGE
(Net purchase order value = Net purchase order value of item / Item quantity * Quantity posted to account)


For documents with target purchase order quantity, such as contracts, the net purchase order value is calculated as follows:
NETWR = EKPO-ZWERT / EKPO-KTMNG * EKKN-MENGE
(Net purchase order value = Target value of item / Target quantity of item * Quantity posted to account)
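A small illustrative calculation in plain Python (field names borrowed from the tables above, numbers invented) of how the item net value is pro-rated to the account assignment lines:

# Hypothetical PO item of 100 PC worth 500 (EKPO), split over two account
# assignment lines (EKKN) of 60 PC and 40 PC.
ekpo = {"NETWR": 500.0, "MENGE": 100.0}          # item net value / item quantity
ekkn_lines = [{"MENGE": 60.0}, {"MENGE": 40.0}]  # quantities posted to account

for ekkn in ekkn_lines:
    # NETWR per account line = EKPO-NETWR / EKPO-MENGE * EKKN-MENGE
    netwr_acc = ekpo["NETWR"] / ekpo["MENGE"] * ekkn["MENGE"]
    print(netwr_acc)  # 300.0 and 200.0 - the account lines add up to the item value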

2LIS_02_ACC-LOEKZ
In the extract structure, the value of LOEKZ comes from EKPO, but the extractor does not deliver it as data. LOEKZ is passed internally to ROCANCEL, and if ROCANCEL is mapped in the DSO, those items are deleted because LOEKZ is the deletion indicator. If you need the LOEKZ value itself, fetch it in the enhancement into a Z-field.



SAP Notes Link
https://help.sap.com/saphelp_nw70/helpdata/en/f3/487053bbe77c1ee10000000a174cb4/frameset.htm




Monday, July 20, 2015

Infopackages


These are used to fetch data from ECC to BW. There are different update modes which can be used when fetching data from the source system.


Above is the picture of the various tabs of an infopackage.

  • The Data Selection tab is where we can add filters when extracting data from the source. The filter fields need to be maintained in the source system to be visible here.
  • The Extraction tab gives details of the type of the datasource (*needs a more appropriate explanation).
  • The Processing tab tells you to what level the data will be processed, e.g. only to the PSA, or to the data targets only.
  • Data Targets gives the details of all the objects to which the data needs to be loaded.
  • The Update tab is the most important tab, as it has details on how the datasource should work. More details are given below.
  • Schedule is for triggering the data extraction.
Update Tab
This is the most important tab in the infopackage. Here, we specify the type of update.


Full Update

When you run using Full Update, whatever data is there in the source is pulled to BW. If you use this alone, you can't run a delta, as no pointer is set to find delta records. Only after initializing the datasource can delta records be fetched.
A full update requests all data that corresponds to the selection criteria you set in the scheduler. If, say, 100,000 data records have accumulated in the period specified in the scheduler, a full update requests all 100,000 of them.
When running full loads, if we have enough downtime, we can fill the setup tables and do a full load; otherwise, we can do an initialization without data transfer, which doesn't fetch any data but places a pointer for the delta, so that the next time data is extracted it fetches the full-load records together with the delta records.

Initialize Delta Process
Only after doing the initialization with any of the options below is the datasource made delta-capable and visible in RSA7. The 'Delta' option in the infopackage is also available only after this process.

  • Initialization with Data Transfer
    • When executed, this creates an entry in RSA7, which means the pointer is set for delta records. It also fetches all records from ECC and, after it is completed, places the pointer. It is Full Load + Initialization.
  • Initialization without Data Transfer
    • When executed, this creates an entry in RSA7, which means the pointer is set for delta records. It fetches only a single record to BW, which is a header record.
  • Early Delta Initialization
    • With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the updating of data in the source system.
    • You can only execute an early delta initialization if the DataSource extractor called in the source system with this data request supports this.

Delta
A delta update only requests data which has appeared since the last delta. Before you can request a delta update, you must first initialize the delta process. A delta update is only possible when loading from SAP source systems. If a delta fails (status 'red' in the monitor) or the overall status of the delta request has been set to red manually, the next data request is performed in repeat mode.

  • F : Flat file provides the delta
  • E : Extractor determines the delta, Ex: LIS, COPA
  • D: Application determines the delta, Ex: LO, FI-AR/AP
  • A: Use ALE change log delta
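
A conceptual sketch in plain Python (a toy model, not how the BW delta queue is really implemented) of the delta pointer and the repeat mode described above:

# The source keeps changed records with a change timestamp; a delta request
# returns only what is newer than the pointer and remembers the last package
# so that a failed (red) request can be repeated.
source = [
    {"doc": "4500000001", "changed_at": 10},
    {"doc": "4500000002", "changed_at": 25},
    {"doc": "4500000003", "changed_at": 40},
]

pointer = 0          # set by the init request
last_package = []    # kept for a possible repeat

def delta_request(repeat=False):
    global pointer, last_package
    if repeat:                        # previous delta turned red: send it again
        return last_package
    package = [r for r in source if r["changed_at"] > pointer]
    if package:
        pointer = max(r["changed_at"] for r in package)
    last_package = package
    return package

print(delta_request())              # all three records (first delta after init)
print(delta_request(repeat=True))   # repeat mode re-sends the same package
print(delta_request())              # [] - nothing has changed since the pointer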


Full Repair  Request

This is present in the 'Scheduler' menu -> Repair Full Request at the top when you open the infopackage. It is used to fill gaps where delta extractions did not extract all delta records, i.e. when the delta extraction process failed and cannot be repeated/restarted.
The request gets data from the setup tables without the delta being disturbed; the setup tables are filled only according to the given criteria.
It is also useful when you are initially extracting large volumes of data: you execute an init without data transfer and then execute multiple infopackages (full update only) flagged as full repair requests, each with specific selection criteria.

Finding the Infopackage Job

Once the infopackage is triggered for running, if you want to see the job details,
Environment -> Process Overview -> In the Source System

Else,
Copy the request number, go to the source system's SM37, enter the job name as 'BI<request number>' and the user as '*'.

Data transfer using Infopackages

Data transfer using infopackages is done using IDocs and tRFC. Check LINK for more info.
Infopackages can fail due to a number of reasons.
Some of the common issues can be found here: LINK


Thursday, July 16, 2015

LO - Adding new fields to the existing data flow

LO Extraction

Activating new datasource in LBWE

  • Go to LBWE and look for the datasource
  • The datasource can be in one of the following states



  • Click on 'Inactive'. This makes the datasource active. The system might ask for a customizing transport and shows a pop-up with an MCEX notification. Continue, and the datasource will be activated.
    • The above step activates the datasource in the development client. If development and testing are done in different clients, use t-code SCC1 to copy the changes between the clients.



Enhancing existing datasources

Adding field in LBWE
Check if the field you are looking for is present in the datasource structure. Go to LBWE and look at 'Maintenance'. If the field is there, we can directly add it by moving it from the right side to the left.

Adding field as an enhancement
If the field is not present in the extract structure, we need to add the field to the append structure of the datasource.

There is also a way to enhance the MC structure itself for the new field, which I have never tried...!!

Adding fields in LBWE

Now that we already have the field in LBWE, all we need to do is move the field from the right side to the left side. We can do that following the steps below.

Step 1 
Check and clear the setup tables.

Step 2
Clear the delta queue in LBWQ.
If there are entries in LBWQ, run the V3 job - this should move the entries to RSA7.
You can also delete the queues in LBWQ, but then you will lose those entries.

Step 3
Go to LBWE and click on 'Maintenance'.
Click on the fields needed and move them to the left. Once the field appears in the pool, it can be moved from the right side to the left side and is ready to use.

There will be a couple of pop-ups with information; read them (if you want) and click OK. Once the field is added, you can see that the datasource becomes red and inactive.

Now click on the datasource; this activates the datasource and takes you to the maintenance screen. The new fields are hidden by default. You need to remove the checks for 'Hide field' and 'Field available only in customer exit'.
This step makes the datasource change from red to yellow.
Click on Job Control (Inactive) and this makes it active.
The field is successfully added to the datasource and is all active.
Note : If you have separate development and development-testing clients, you need to copy the customizing changes to the test client (this is not about a test system; it applies only if you have a separate test client).
Use t-code SCC1 in the target client and run it for the customizing transport; check 'Including Request Subtasks' as well.
Since customizing transports are at client level, they need to be copied to the other clients this way.


Possible Errors
If you get the below error, it means that LBWQ is not cleared:
"Entries for application 13 still exist in the extraction structure"
Sol :
Run the V3 job so that the entries in LBWQ are cleared.


Struct appl 13 cannot be changed due to setup table
Sol :
Delete the setup tables in LBWG.

Running Setup Tables
When the t-code for filling the setup tables is run, you might get the below error.
"DataSource 2LIS_02_*** contains data still to be transferred"
That means that not all the data from the ECC queues has been transferred to BW.
So make sure that LBWQ is empty for the extractor (in most cases the V3 job runs regularly every 15/30 minutes, so these queues will be cleared). Next, clear the delta queues in RSA7 by pulling them to BW. Sometimes you might need to run the infopackages multiple times if there is activity happening in the source system.
Once LBWQ and RSA7 are empty, you can run the t-code and it will go through fine.

Adding fields in structure
Step 1
Go to RSA6 -> click on the datasource and go to the MC* structure.
If you want to create a new append structure, click on Append Structure -> New.
Otherwise, go to any of the existing append structures and add your fields there.


Step 2
Open the datasource again in RSA6 and you can see that the new fields are all set to Hidden. Click on edit and unhide the fields.

Step 3
Next, the code for extracting these fields should be added in CMOD or in a BADI implementation, whichever the company is using.



Wednesday, July 15, 2015

ECC Tables

ECC Operational Tables

Table : Description

CDHDR : Change document header (any changes to a document are recorded here)
CDPOS : Change document items
BDCP : Change pointer table
BDCP2 : Aggregated change pointers (BDCP, BDCPS) - master data delta changes
SEOCOMPO : List of all BADI implementations (classes) and their corresponding methods
ROOSGENDLM : Generic delta management for DataSources (gives details of when the last delta was run)
TMCEXACT : LO data extraction: activate datasources/update (check for inconsistencies when activating a new LIS datasource)
TCDOB/T : Objects for change document creation
RODELTAM : Delta properties for the different delta methods
ROIDOCPRMS : Control parameters for data transfer from the source system - IDoc configuration
ROOSSHORTN : DataSource short name
ROOSPRMSC : Control parameters per DataSource channel - delta initializations
TRFCQOUT : tRFC queue description (outbound queue) - SMQ1
ARFCSSTATE : Description of ARFC call status (send) - SAP Note 378903
CLBW_SOURCES : DataSources for classification data
BWOM_SETTINGS : BW CO-OM: Control data - has all FI-related control parameters (https://blogs.sap.com/2013/04/02/bwomsettings-for-fi-loads-in-sap-bi/)
BWOM2_TIMEST : BW CO-OM: Timestamp table for delta extraction
BWFIAA_AEDAT_AS : FIAA-BW: New and modified master records for delta upload


Functional Tables

Table : Description

BKPF : Accounting document header
BSEG : Accounting document line items (segment)
FAGLFLEXA : General Ledger: Actual Line Items
KNA1 : Customer General Data
KNB1 : Customer Master - Co. Code Data (payment method, reconciliation acct)
KNB4 : Customer Payment History
KNB5 : Customer Master - Dunning Info
KNBK : Customer Master Bank Data
KNKA : Customer Master Credit Mgmt.
KNKK : Customer Master Credit Control Area Data (credit limits)
KNVV : Sales Area Data (terms, order probability)
KNVI : Customer Master Tax Indicator
KNVP : Customer Partner Function Key
KNVD : Output Type
KNVS : Customer Master Ship Data
KLPA : Customer/Vendor Link
MARA : Material Master: General
MAKT : Material Master: Short Description
MARM : MM Conversion Factors
MVKE : Sales <Sales Org, Distr Ch>
MLAN : MM Sales <Country>
MAEX : MM Export Licenses
MARC : Material Plant
MBEW : MM Valuation
MLGN : MM WM Inventory
MLGT : WM Inventory Type
MVER : MM Consumption, Plant
DVER : MM Consumption, MRP Area
MAPR : MM Forecast
MARD : MM Storage Location
MCH1 : MM Cross-Plant Batches
MCHA : MM Batches
MCHB : MM Batch Stock
MARCH : MM C Segment: History
MARDH : MM Storage Location Segment: History
MBEWH : Material Valuation: History
MCHBH : Batch Stocks: History
MKOLH : Special Stocks from Vendor: History
MSCAH : Sales Order Stock at Vendor: History
MSKAH : Sales Order Stock: History
MSKUH : Special Stocks at Customer: History
MSLBH : Special Stocks at Vendor: History
MSPRH : Project Stock: History
MSSAH : Total Sales Order Stocks: History
MSSQH : Total Project Stocks: History
AUFK : Order Master Data
VBPA : Sales Partner Functions
ADR6 : E-Mail Addresses (Business Address Services)
VBPA2 : Sales Document: Partner (used several times)
FPLTC : Payment Cards: Transaction Data - SD Sales
AEOI : ECH: Object Management Records for Change Master - Change Records - Revisions
AENR : Change Master
NACH : Detailed Output Data / Output Conditions (t-code: VV31/32/33)

Wednesday, July 8, 2015

Master Data

Master Data Tables

Basic Notes

Click HERE to get basic information on Master data tables and Architecture


Deleting Master Data

As time goes on, some junk/obsolete values accumulate in the master data tables. Master data which is not being used in any InfoProvider can be deleted directly from the table without any issues.


Default : 
If the deletion of master data is selected, the system automatically deletes the entries which are no longer used. When this is executed, the entry gets deleted from the P table (/BI0/PCUSTOMER) of the InfoObject, but it is still visible in the SID table (/BI0/SCUSTOMER), in the following way.

Here, CHCKFL, DATAFL and INCFL are all blank, because the specific entry is deleted from the P table, but the SIDs are not deleted.
The reason for not deleting the SID value is that, if the same customer number arrives again in the future as correct data, the system does not need to spend time generating a new SID value; the existing SID value is reused.
However, if we know the data is incorrect, it can be deleted together with the SIDs.
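
A rough sketch in plain Python (structures and values invented) of the behaviour described above: the P-table entry disappears, the SID survives, and a later reload reuses it:

# Hypothetical mini-model of an InfoObject's P table and SID table.
p_table = {"C1": {"COUNTRY": "IN"}, "C2": {"COUNTRY": "US"}}
sid_table = {"C1": 1001, "C2": 1002}
next_sid = 1003

def delete_master_data(value, delete_sids=False):
    # Default deletion removes the attribute record but keeps the SID.
    p_table.pop(value, None)
    if delete_sids:
        sid_table.pop(value, None)

def get_or_create_sid(value):
    # On a later load the old SID is reused instead of generating a new one.
    global next_sid
    if value not in sid_table:
        sid_table[value] = next_sid
        next_sid += 1
    return sid_table[value]

delete_master_data("C2")           # P entry gone, SID 1002 kept
print(get_or_create_sid("C2"))     # 1002 - reused when C2 is loaded again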

We can select how we want to delete the master data from below options.

Delete SIDS
When deleting the entries, the system also deletes the corresponding SID values.

Delete Texts
It deletes the texts

Simulation Mode
This simulates the deletion without actually deleting any data in the system.

Store Master data Where used list
<need to check>

Search Mode

In the output, it gives information about where the data being deleted is used.
When 'O' - if we are deleting a customer, it gives one of the many places the specific customer number is used.
When 'P' - gives one usage for each InfoProvider.
When 'E' - gives one usage.
When 'A' - gives all the places where the customer number is used.


Program to delete master data
RSDMDD_DELETE_BATCH

The advantage of using this program is that we can run the deletion of individual entries in the master data using the 'Change Filter' option. Once the program has run, the log can be seen in t-code SLG1 (see below for details).

'Check also NLS' should be selected for the deletion to happen. (NLS - Near-Line Storage)

 
Running the program without selecting any option deletes the master data entries which are present in the InfoObject but are not used anywhere, without deleting the SIDs.


Tcode :
Logs after master data deletion : SLG1
Enter the below values to run the t-code:
Object : RSDMD
SubObject : MD_DEL





Usage of SIDs
FM :  RSDDCVER_USAGE_MDATA_BY_SID

Points To Remember

- Making Display Attribute as Navigational Attribute
  • Making an attribute of an InfoObject display to navigational does not affect other objects where it is used. No other activations are required. Sometimes the InfoObject might not get activated on the first try, giving an error: 'characteristic: the attribute SID table(s) could not be filled'.
    • Activate the object again and it goes through (not sure why).

Extended Star Schema

Extended Star Schema

The Extended Star Schema has a fact table in the middle surrounded by dimension tables. The dimension tables hold dimension IDs and SIDs. The SID table is the table which connects the dimension table and the actual (master) data table.
In the diagram below, if we want to know the revenue from a material for a specific customer: every customer number has an SID generated for it, and in the same way every material has an SID generated.



For each multidimensional data model, we have a fact table, which holds all the key figure values. For each dimension defined, there is a dimension table which has a dimension ID and SIDs. For every InfoObject there is an SID table, which connects the InfoObject values to the dimension values.
When we are defining attributes, there are 3 ways of doing it.
  • Have it as a characteristic in a dimension
  • Add it as a navigational/display attribute (more notes later), where it does not directly reside in the cube but can be reached by drilling down
  • As a hierarchy.

Customer  Product  Color  Sales  Fact
C1        P1       Black  S1     30

Customer and Sales are individual dimensions, and Product and Color belong to the same dimension.
When this record comes to BW, SID tables are created for each of them. Below you can see there are 4 different SID tables (highlighted in yellow) and each value has an SID created (boxed in red).
Once the SIDs are created, the DIM tables are filled. Since Sales and Customer are different dimensions, there are separate DIM tables for them. Product and Color belong to the same dimension, hence there is only one DIM table. The values of the DIM tables are filled as shown in the figure below.

Hence the entry is completely written. Now another entry comes in.
The values are populated as follows.

Customer  Product  Color  Sales  Fact
C1        P1       Red    S1     40
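
A rough sketch in plain Python (IDs and structures invented) of how the SID and DIM tables fill up for the two example records above:

# Toy model of SID and DIM tables for the two example fact records.
sid_tables = {"CUSTOMER": {}, "PRODUCT": {}, "COLOR": {}, "SALES": {}}
dim_tables = {"DIM_CUSTOMER": {}, "DIM_PRODUCT_COLOR": {}, "DIM_SALES": {}}
fact_table = []

def sid(char, value):
    # Look up the SID; create a new one only if the value is new.
    table = sid_tables[char]
    return table.setdefault(value, len(table) + 1)

def dim(dim_name, sids):
    # One DIM entry per unique SID combination; Product and Color share a DIM.
    table = dim_tables[dim_name]
    return table.setdefault(sids, len(table) + 1)

def load(customer, product, color, sales, amount):
    fact_table.append({
        "DIM_CUSTOMER": dim("DIM_CUSTOMER", (sid("CUSTOMER", customer),)),
        "DIM_PRODUCT_COLOR": dim("DIM_PRODUCT_COLOR",
                                 (sid("PRODUCT", product), sid("COLOR", color))),
        "DIM_SALES": dim("DIM_SALES", (sid("SALES", sales),)),
        "AMOUNT": amount,
    })

load("C1", "P1", "Black", "S1", 30)
load("C1", "P1", "Red", "S1", 40)   # reuses the C1/P1/S1 SIDs, adds a SID for Red
print(fact_table)                   # only the Product/Color DIM id differs between rows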
  
Advantages:

  • The master data is kept outside the InfoCube, which makes it accessible from other InfoCubes/InfoProviders.
  • Alphanumeric values are converted to SID values (surrogate IDs), which are numbers, increasing processing speed.

Direct Update DSO

You can use a DataStore object for direct update as a data target for an analysis process. It has only an active data table. It can be filled from APDs. Data can be aggregated as well as overwritten into the DSO, and it can also be used for direct reporting, though this is not recommended.
  • SIDs can't be generated in such a DSO.
  • Data records with the same key are not aggregated in such a DSO.
  • Every record has its own technical key. Since a change log is not generated, we cannot perform a delta update to the InfoProviders.
  • Supports parallel load.
A scenario for it is as follows.

Standard DSO

Standard DSO

A standard DataStore object is represented on the database by three transparent tables:
  • Activation queue (New Data)
    • Used to save DataStore object data records that need to be updated, but that have not yet been activated. After activation, this data is deleted if all requests in the activation queue have been activated.
  • Active data:
    • A table containing the active data (A table).
  • Change log:
    • Contains the change history for the delta update from the DataStore object into other data targets, such as DataStore objects or InfoCubes.

Data can be loaded from several source systems at the same time because a queuing mechanism enables a parallel INSERT. The key allows records to be labeled consistently in the activation queue. Reports can be built on Std. DSOs.

Delta works with the image concept here. When data is loaded from the DSO to an InfoCube, it is always read from the change log table, which has the history of the delta entries. 0RECORDMODE helps in maintaining the delta concept when data is loaded to another InfoProvider.

Settings in Std.DSO

Unique Data Records
Overwriting of data records is not permitted. The DataStore object can then be used for loading unique data records.
With this indicator you determine whether ONLY unique data records are posted in the DataStore object. The performance of activation can be increased considerably in this way.
If the loaded request does contain data that is already in the DataStore object, activation of the request will lead to an error.

Note:

This property of a DataStore object is only supported when SID values are generated during activation (see indicator "generate SID values").

SID Generation upon Activation

When checked (which is the default), the 'SID Generation upon Activation' box causes the system to generate an integer number known as a surrogate ID (SID) for each master data value. These SIDs are stored in separate tables called SID tables.
For each characteristic InfoObject, SAP NetWeaver BW checks whether an SID value already exists for each value of the InfoObject in the SID table. The system generates a new SID value if an existing value is not found. The SID is used internally by SAP NetWeaver BW when a query is based on a DSO.

Secondary Index on DSO

A secondary index, simply put, is a way to efficiently access records in a database by means of some piece of information other than the usual (primary) key. In other words, a secondary index is necessary if a table is accessed in a way that does not take advantage of the sorting of the primary index.


If you go to SE14 and give the table name (the active table of the DSO), it gives you the performance of the select statement on the table.
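
A small illustration of the idea in plain Python with SQLite (not the BW data dictionary; table and field names invented): a selection by a non-key field can use the secondary index instead of scanning the whole table.

import sqlite3

con = sqlite3.connect(":memory:")
# Stand-in for a DSO active table: the primary key is document number + item.
con.execute("""CREATE TABLE dso_active (
    doc_no TEXT, item TEXT, customer TEXT, amount REAL,
    PRIMARY KEY (doc_no, item))""")
# Secondary index so that selections by customer do not scan the whole table.
con.execute("CREATE INDEX idx_customer ON dso_active (customer)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM dso_active WHERE customer = ?", ("C1",)
).fetchall()
print(plan)  # shows the query using idx_customer instead of a full scan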

Change Log
For each record being loaded from the source to the DSO, there can be multiple records in the change log.
If the record being loaded is a new record, there is only one entry in the change log, with record mode 'N'.
If it is an existing record, there are two entries: a before image, where the record mode is 'X', and an after image, where the record mode is ' '.
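
A rough sketch in plain Python (simplified to a single key figure) of how activating a standard DSO writes these images into the change log:

# Toy model of standard DSO activation: the active table is overwritten and
# the change log receives a new image 'N', or a before image 'X' (key figure
# with reversed sign) plus an after image ' ' when an existing key changes.
active_table = {}
change_log = []

def activate(key, data):
    if key in active_table:
        before = dict(active_table[key], RECORDMODE="X",
                      AMOUNT=-active_table[key]["AMOUNT"])
        after = dict(data, RECORDMODE=" ")
        change_log.extend([before, after])
    else:
        change_log.append(dict(data, RECORDMODE="N"))
    active_table[key] = data

activate("DOC1", {"AMOUNT": 100.0})   # new record  -> one 'N' entry
activate("DOC1", {"AMOUNT": 120.0})   # overwrite   -> 'X' (-100) and ' ' (120)
for row in change_log:
    print(row)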

--------------------------------------
Datasource -----> DSO_firstlevel -----> DSO_secondlevel.
In this flow, the delta flows fine.
When doing full loads, the full load goes from the datasource to DSO_firstlevel, but when the first level is loaded to the second level, no records are loaded (maybe because the full load did not have any data changes in it??? Not sure).

-------------------------------------

Change Log Deletion
The change log has to be deleted regularly, so as to improve the performance of loading and activating the DSO.
There are 3 ways of deleting it.
Manual Deletion
Go to Manage screen of the DSO --> Environment --> Delete Change Log Data.

Deletion in process chain
Go to 'Other BW Processes' --> Deletion of Requests from the Change Log. Check the last check box, otherwise the data will not be deleted.



Using an SE38 program

Extra Notes
DSO Job Log and Activation Parameters
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f0d82054-f705-2e10-75b8-fc0f428bc16e?overridelayout=true

Write Optimized DSO

Write Optimized DSO

This can be used as a base-level DSO where the data is stored in the same format as it is extracted from the source. In most cases the data is saved as a replica of the PSA, so that even if the PSA is deleted the data is still present in this layer.
  • Records with the same key are not aggregated but inserted as new records, as every record gets a new technical key.
  • There are No SIDs generated for this DSO, which decreases the load time.
  • It allows parallel load, which saves time for data loading.
Reporting can be done on this DSO but it is not recommended.

Data that is loaded into write-optimized DataStore objects is available immediately for further processing. There is no activation needed for the DSO, saving time. No SIDs are generated and the DSO consists of only ‘Active Table’.

A delta cannot be created based on entries (image-based delta) as there is no change log, but a request-based delta is possible.

For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries. However, in comparison to standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting.

Semantic Key

They act as the key fields of the DSO, similar to the key fields of a standard DSO. They help in getting unique records into the DSO, as they do not allow duplicates. The actual technical key of the write-optimized DSO consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD).

The semantic key identifies errors in incoming records or duplicate records, and thus protects data quality: all subsequent records with the same key are written into the error stack along with the incorrect data records.
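
A rough sketch in plain Python (structures invented) of how the technical key keeps every loaded record as its own row in a write-optimized DSO:

# Every record gets its own technical key (request, data packet, record
# number), so records with the same business/semantic key are inserted as
# separate rows instead of being aggregated or overwritten.
wdso = []

def load(request, records):
    for i, rec in enumerate(records, start=1):
        wdso.append({"0REQUEST": request, "0DATAPAKID": 1, "0RECORD": i, **rec})

load("REQ1", [{"doc_no": "D1", "amount": 10}, {"doc_no": "D2", "amount": 20}])
load("REQ2", [{"doc_no": "D1", "amount": 15}])   # the same doc_no arrives again

print(len(wdso))  # 3 rows: the second D1 becomes a new row, not an overwrite
# The semantic key (e.g. doc_no) is what the DTP error stack uses to hold back
# records that follow an erroneous record with the same key.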

My Notes
Semantic keys
They can handle duplicate records...
But if the same record comes in a subsequent load, it filters out those values too...

I think it works when deleting and reloading the data.

Datastore Objects


A DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic/detailed) level. The data in DataStore objects is stored in transparent, flat database tables.
The data from a DataStore object can be updated with a delta update into InfoCubes (standard) and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.

There are 3 types of DSOs.

Star Schema

Star Schema                 

A star schema basically has a fact table and dimension tables surrounding it. All the metrics are stored in the fact table, and the entities and their attributes are stored in the dimension tables. The dimension tables are connected to the fact table but are not connected to each other.
The main reason for opting for this model is easy retrieval of data in queries. Let's say there is a requirement: 'Show me the Sales Amount by Sales Department by Material Group by Month'. When using a star schema, the Sales Amount is present in the fact table and the Sales Department, Material and Month are in dimension tables. How we model our dimensions is important when using a star schema.
Case Study:
'Track the performance of materials with respect to customers and sales persons.'


  • Having complete understanding of the underlying business process
    • Identify the Entities and the relations between them. 
      • The entities like Customer, Material and Sales Person are called 'strong entities'. When designing dimensions, we need to know the relations between them. From general understanding we can say that one customer can buy any number of materials and, similarly, a material can be purchased by any number of customers; they share an n:m relation. The same goes for Sales Person and Customer, and for Sales Person and Material.
    • Know how the performance is measured / the facts.
      • To know the performance, we need to know how many materials are sold, which gives the facts. Facts are normally additive. Sales transactions give these details. Hence, the sales transaction becomes the intersection entity, which resolves the n:m relation between the dimensions into a 1:n relation.
    • Determine if any additional details are required.
      • This gives additional entities and attributes. For Material, we have the material number, material name, type etc. as attributes, and we might also need to consider the material group, which is again an entity and can have attributes of its own.
    • If additional details are required, we need to know the relation between entities and relation between entities and their attributes.
  • Creating a valid data model.
    • The entities and their attributes that are arranged in a parent-child relationship (1:n) need to be organized into groups.
These groups are called dimensions, and their properties are called attributes. The strong entities define the dimensions, and the attributes of a dimension represent a specific business view on the facts, which are derived from the intersection entities.

The attributes of the dimension are then organized in a hierarchical way, and the most atomic attribute forms the leaves of the hierarchy tree and defines the granularity of the dimension. This is the MDM (multidimensional model), where the facts sit in the center with the dimensions surrounding them: a simple but effective concept.



Fact Table: A central intersection entity defines a fact table. An intersection entity such as the document number is normally described by facts (sales amount, quantity), which form the non-key columns of the fact table. In effect, m:n relationships between strong entities meet each other in the fact table, thus defining the cut between dimensions.

Dimensions (Tables): Attributes with 1:n conditional relationships should be stored in the same dimension, such as material group and material. The foreign-key/primary-key relations define the dimensions.

Time Dimension: One exception is the time dimension. As there is no correspondence in the ERM, time attributes (day, week,year) have to be introduced in the MDM process to cover the analysis needs.

Dimension tables should have a 'relatively' small number of rows (in comparison to the fact table; factor at least 1:10 to 1:20).

Disadvantages

  • In a star schema, the master data is present in the dimension table, which increases its size; it is also accessible only for that InfoCube, so reusability is reduced.
  • Handling languages (for each language, there is a new record)
  • Handling time dependency increases the volume of the table
  • A star schema uses alphanumeric values for keys, which increases access time compared to using numeric keys.
  • There can only be 16 dimensions for conducting analysis (*)

Data Modelling

Data Modeling

OLAP (Online Analytical Processing) is one of the major requirements of data warehousing. In short, OLAP offers business analysts the capability to analyze business process data (KPIs) in terms of the business lines involved. Normally this is done in stages, starting with business terms showing the KPIs on an aggregate level and proceeding to business terms on a more detailed level. Architected data marts (InfoCubes) represent a function, department or business area; they give an aggregated and integrated view. The aim is to present information to the business analyst in a way that corresponds to his normal understanding of his business, i.e. to show the KPIs, key figures or facts from the different perspectives that influence them (sales organization, product/material or time). In other words, to deliver structured information that the business analyst can easily navigate by using any possible combination of business terms to illustrate the behavior of the KPIs.
Scenario:                                                                                                                                                                                          
A multi-step multi-dimensional analysis will look like this:

  • Show me the Sales Amount by Sales Department by Material Group by Month
  • Show me the Sales Amount for a specific Sales Department 'X' by Material by Month

A DataStore object may serve to report on a single record (event) level such as:

  • Show yesterday's Sales Orders for Sales Person 'Y'.

This does not mean that sales order level data cannot reside in an InfoCube, but rather that this depends on the particular information needs and navigation.

  • Analytical processing (OLAP) is normally done using InfoCubes.
  • DataStore objects should not be misused for multi-dimensional analysis.