Monday, December 21, 2015

2LIS_03_BF/BX

Inventory



Deleting Setup Tables : LBWG(03)
Filling Setup Tables    : OLI1BW
Setup Table Job           : RMCBNEUA

Tables
MKPF
MSEG

TCodes
MB01/02/03



SAP Notes
492828 - Determining the transaction key for 2LIS_03_BF + 2LIS_03_UM


Wednesday, November 11, 2015

2LIS_06_INV

Filling Setup Tables : OLI6BW
Delete Setup Tables : LBWG - Application 06
Setup Table MC06M_0ITMSETUP

When running setup tables, the selection screen parameters are mandatory, even though the screen doesn't say so.
Without selections, the job fails with an error.

Enter Doc Number, Year and Company Code and run the setup tables. It ran fine a few times, but twice I got a duplicate error. So I deleted the setup tables (LBWG) and ran them again with only Doc Number and Year.

My Notes:
This datasource is used to get the invoiced data. The requirement was to replicate the 'MB5S' tcode from ECC. Users wanted the data from the EKBE table, which shows invoiced quantities (BEWTP = 'Q') and delivered quantities (BEWTP = 'E').

The entries from EKBE with BEWTP = 'Q' can be extracted using this datasource, given that we extract records where EBELN, MENGE and MATNR are not initial (my assumption, and it worked fine).

The BEWTP = 'E' entries are extracted from 2LIS_03_BF.

Issues:

  • Difference in amounts between MSEG-DMBTR and EKBE-DMBTR
    • EKBE holds Purchase Order History data and MSEG holds Document Segment: Material data. When joining 2LIS_03_BF (goods movements, 'E', from MSEG) with 2LIS_06_INV (invoices, 'Q', from EKBE), DMBTR is generally the same in MSEG and EKBE, but there are a few scenarios where they differ. Below is the explanation.
    • http://scn.sap.com/thread/1895345
    • In the example below, the $0 value in MSEG is due to the fact that the standard price for that material in table MBEW is $0 - this is the value that would post to inventory. The value that appears in EKBE where value = "E" is the amount the vendor is invoicing us. So it appears that EKBE is using the invoiced amount as the value for DMBTR.





SAP Link

APD

APD - Analysis Process Designer

TCode: RSANWB

Debugging APDs
https://blogs.sap.com/2014/03/04/steps-to-debug-apd-routine-code/

Duplicates in APD

  • Go to the step and create an intermediate result.
    • This generates a temporary table with the results from that stage.
    • The table is visible in SE11/SE16 as a regular table.
    • See if you can find the duplicates in that set.
  • Delete the intermediate result table again.



Some known issues
  • APD connecting lines are missing
    • The connecting lines between the APD steps disappear.
      • This issue happened after the SAP GUI 7.4 update.
      • Check Note 2227956

  • APD taking a long time to run
    • Uncheck 'Process Data in Memory' to improve the APD performance. It is checked by default.
      • GoTo -> Performance Settings

Differences

Difference between REFRESH; FREE; CLEAR
REFRESH : If REFRESH is applied to an internal table, the entries in the table are deleted, but the memory allocated to the table still exists.

FREE : If FREE is applied to an internal table, the memory allocated to the table is released and can be used by other variables/objects. In this case both the entries and the memory no longer exist.

CLEAR : CLEAR is generally used for work areas; also, if the internal table is defined with a header line, CLEAR can be applied to clear the header line. The memory allocated to the table is not removed.
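A minimal sketch showing the three statements side by side (the table and work area names are illustrative):

DATA: lt_mat TYPE STANDARD TABLE OF mara,
      ls_mat TYPE mara.

SELECT * FROM mara INTO TABLE lt_mat UP TO 100 ROWS.

REFRESH lt_mat.   " entries deleted, allocated memory kept
FREE lt_mat.      " entries deleted and memory released
CLEAR ls_mat.     " work area reset to its initial value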

Difference between CHECK and IF
If an IF condition is not satisfied, control skips the IF block and continues with the remaining executable statements after ENDIF, still inside the current loop pass.

With CHECK inside a loop, if the condition is not satisfied, the rest of the current loop pass is skipped and processing continues with the next pass. Outside a loop, a failed CHECK leaves the current processing block.
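A small sketch of the behaviour inside a loop (table and field values are illustrative):

DATA: lt_mat TYPE STANDARD TABLE OF mara,
      ls_mat TYPE mara.

SELECT * FROM mara INTO TABLE lt_mat UP TO 100 ROWS.

LOOP AT lt_mat INTO ls_mat.
* IF: on a false condition, control continues after ENDIF,
* still within the current loop pass.
  IF ls_mat-matnr IS NOT INITIAL.
    WRITE: / ls_mat-matnr.
  ENDIF.

* CHECK: on a false condition, the rest of this loop pass is
* skipped and processing continues with the next row.
  CHECK ls_mat-mtart = 'FERT'.
  WRITE: / 'Reached only for rows that pass the CHECK.'.
ENDLOOP.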

Unwanted Characters

Method 1
*** Removing unwanted characters.
DATA: var_str(100) VALUE
        'QWERTYUIOPASDFGHJKLZXCVBNMqwertyuiopasdfghjklzxcvbnm1234567890-_',
      var_custpo TYPE /BIC/OIZCCUSTPO,
      var_temp   TYPE /BIC/OIZCCUSTPO,
      var_len(3) TYPE c.

CLEAR: var_custpo,
       var_len,
       var_temp.

var_custpo = source_fields-/bic/zccustpo.
var_len = strlen( var_custpo ).

* Walk the string from the end and keep only allowed characters.
DO var_len TIMES.
  SUBTRACT 1 FROM var_len.
  IF var_str CS var_custpo+var_len(1).
    CONCATENATE var_custpo+var_len(1) var_temp INTO var_temp.
  ENDIF.
ENDDO.

RESULT = var_temp.

Method 2

CONSTANTS: c_allowed(100) TYPE c VALUE
  ' #{}[]!"%&"()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ abcd' &
  'efghi' &
  'jklmnopqrstuvwxyz '.

DATA: var_eanupc TYPE /BI0/OIEANUPC,
      var_len    TYPE i,
      var_error  TYPE sy-subrc,
      var_offset TYPE sy-index.

CLEAR: var_eanupc.
var_eanupc = source_fields-eanupc.
TRANSLATE var_eanupc TO UPPER CASE.

CLEAR: var_error.
* Check the value against the permitted characters (RSKC settings).
CALL FUNCTION 'RSKC_CHAVL_CHECK'
  EXPORTING
    i_chavl     = var_eanupc
  IMPORTING
    e_err_subrc = var_error.

IF var_error NE 0.
  CLEAR: var_len.
  var_len = strlen( var_eanupc ).
* Blank out every character that is not in the allowed list.
  DO var_len TIMES.
    CLEAR: var_offset.
    var_offset = sy-index - 1.
    IF var_eanupc+var_offset(1) CO c_allowed.
    ELSE.
      var_eanupc+var_offset(1) = ''.
    ENDIF.
  ENDDO.
ENDIF.

* result value of the routine
RESULT = var_eanupc.

Reading Dynamic Table

Dynamic table field declaration with reference
For the parameters we use a dynamic reference to the Data Dictionary. This way we do not have to define texts in the text pool; the field name from the dictionary is used. Character fields hold the dictionary names (excerpt):

* Logon data
* (t_class / t_accnt completed analogously; BAPILOGOND field names assumed)
t_gltgv(30) value 'BAPILOGOND-GLTGV',
t_gltgb(30) value 'BAPILOGOND-GLTGB',
t_ustyp(30) value 'BAPILOGOND-USTYP',
t_class(30) value 'BAPILOGOND-CLASS',
t_accnt(30) value 'BAPILOGOND-ACCNT',

gltgv like  (t_gltgv),
gltgb like  (t_gltgb),
ustyp like  (t_ustyp),
class like  (t_class),
accnt like  (t_accnt),

Reference : IAM_USERCHANGE

-------------------------------------------------------------------------------------------------------------------------

REPORT z_dynamic_read.

DATA: gt_itab TYPE REF TO data,
      ge_key_field1 TYPE char30,
      ge_key_field2 TYPE char30,
      ge_key_field3 TYPE char30,
      go_descr_ref TYPE REF TO cl_abap_tabledescr.

FIELD-SYMBOLS: <gt_itab> TYPE STANDARD TABLE,
               <gs_key_comp> TYPE abap_keydescr,
               <gs_result> TYPE ANY.

PARAMETERS: pa_table TYPE tabname16 DEFAULT 'SPFLI',
            pa_cond1 TYPE string DEFAULT sy-mandt,
            pa_cond2 TYPE string DEFAULT 'LH',
            pa_cond3 TYPE string DEFAULT '0123'.

START-OF-SELECTION.

* Create and populate internal table
  CREATE DATA gt_itab TYPE STANDARD TABLE OF (pa_table).
  ASSIGN gt_itab->* TO <gt_itab>.
  SELECT * FROM (pa_table) INTO TABLE <gt_itab>.

* Get the key components into the fields ge_key_field1, ...
  go_descr_ref ?= cl_abap_typedescr=>describe_by_data_ref( gt_itab ).
  LOOP AT go_descr_ref->key ASSIGNING <gs_key_comp>.
    CASE sy-tabix.
      WHEN 1.
        ge_key_field1 = <gs_key_comp>-name.
      WHEN 2.
        ge_key_field2 = <gs_key_comp>-name.
      WHEN 3.
        ge_key_field3 = <gs_key_comp>-name.
    ENDCASE.
  ENDLOOP.

* Finally, perform the search
  READ TABLE <gt_itab> ASSIGNING <gs_result>
          WITH KEY (ge_key_field1) = pa_cond1
                   (ge_key_field2) = pa_cond2
                   (ge_key_field3) = pa_cond3.
  IF sy-subrc = 0.
    WRITE / 'Record found.'.
  ELSE.
    WRITE / 'No record found.'.
  ENDIF.
One note: when an internal table is created dynamically as in the program above, the table key is not the key defined in the DDIC structure. Instead, the default key of the standard table is used (i.e. all non-numeric fields).
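If you need the DDIC key instead, one option (a sketch, not part of the original program) is to read the key columns from DD03L and feed those field names into the dynamic READ:

TYPES: BEGIN OF ty_key,
         position  TYPE dd03l-position,
         fieldname TYPE dd03l-fieldname,
       END OF ty_key.
DATA lt_key TYPE STANDARD TABLE OF ty_key.

* Key columns of the active DDIC version of the table
SELECT position fieldname FROM dd03l
  INTO CORRESPONDING FIELDS OF TABLE lt_key
  WHERE tabname  = pa_table
    AND keyflag  = 'X'
    AND as4local = 'A'.
SORT lt_key BY position.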


Japanese Currency Issue

In SAP, amounts (data type CURR) are always stored with two decimals in the database.
It does not matter how many decimals are actually allowed for that currency.
Currencies like JPY, KRW, CLP, etc. do not have any decimals.

In those cases, SAP (both on ECC and BW) divides the amount for those currencies by the decimal factor maintained in the TCURX table and multiplies by the same factor in the report.

For example, 30,000 JPY will be stored as 30,000 / 100 = 300 (because TCURX has 0 decimals for JPY).

For zero decimals, SAP divides by 100. If we are loading flat file data, we must make sure that this division occurs. After the division, 30,000 is stored as 300.
In the BW report, 300 is multiplied by 100 and shown as 30,000.
This division and multiplication occur for all the exception currencies maintained in TCURX.

If the division does not happen, the report still multiplies by 100 by default, and the amount is overstated by a factor of 100.

There are specific settings for flat files, as stated in OSS Note 1176399, for special currency loads to work correctly.
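For a flat-file load, the division step could look roughly like this (a sketch only; the amount field and the fixed currency are assumptions):

DATA: lv_extern TYPE p DECIMALS 2 VALUE '30000.00',  " amount as delivered
      lv_intern TYPE p DECIMALS 2,                   " amount as stored (CURR)
      ls_tcurx  TYPE tcurx.

* TCURX-CURRDEC holds the real number of decimals (0 for JPY)
SELECT SINGLE * FROM tcurx INTO ls_tcurx WHERE currkey = 'JPY'.
IF sy-subrc = 0.
* For 0 decimals this divides by 100: 30,000 -> 300
  lv_intern = lv_extern / ( 10 ** ( 2 - ls_tcurx-currdec ) ).
ELSE.
  lv_intern = lv_extern.   " currency not in TCURX: two decimals, no shift
ENDIF.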





Wednesday, September 23, 2015

Excel Tricks

=VLOOKUP(A:A,B:B,1,FALSE)

So I wanted to check whether the values in column A are present in column B. The '1' in the formula says which column of the lookup range to return in the result; since only one column is selected here, it displays the B values. If more than one column is selected, as in =VLOOKUP(A:A,B:D,2,TRUE), it displays the values of the second column ('C'). TRUE/FALSE controls matching, where FALSE means an exact match; '0' can also be used instead of FALSE.
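For example, if A2 contains X1 and X1 appears anywhere in column B, =VLOOKUP(A2,B:B,1,FALSE) returns X1; if it does not appear, the formula returns #N/A.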

Remove Prefix from column
=RIGHT(<field>,length)
Eg : if A1 has value ABC_12345 --> =RIGHT(A1,5) will give you 12345

Comparing values in two fields
=(EXACT(A2,B2))
Or Go to Formulas tab in excel -> Text and select Exact

Increment Values when Number Changes
=IF(A2<>A1,1,B1+1)
Click LINK

Tuesday, September 8, 2015

0FI_AR_4

0FI_AR_4

0FI_AR_4 - Accounts Receivable: Line Items - Customers: Items

Tables : 
BSID ( Open Items )
BSAD ( Cleared Items )


Related T-codes :
FBL5N : This gives the data for Customer Line Item

Delta Functionality
It is defined in FM 'BWFIR_READ_DATA'.






0FI_AP_4

Wednesday, August 19, 2015

BW Questions


Difference between Master data Activation and Attribute Change Run
    • Master Data Activation = Activation
    • Attribute Change Run = (Master Data Activation) + Extra features
  • Master Data Activation
    • When the data is loaded to the InfoObject, it is present in the 'M' (modified) version. Only after the InfoObject is activated does it change to the active ('A') version and become available for reporting.
  • Attribute Change Run
    • It does the master data activation, along with the following tasks:
    • It realigns the aggregates for the changed attributes.
    • It is also needed when navigational attributes are used in aggregates. Once a request is loaded to the cube and rolled up, any later change to a master data value is not reflected in the aggregates. Since queries read from the aggregates, the aggregate and the cube would then show different values. Running an attribute change run realigns the aggregates whenever a master data value changes.

Classification Datasources

Classification is the process of assigning objects to classes and characteristic values to these objects. It is the means by which an object is classified based on its properties.

Object - a predefined business entity such as vendor, material, equipment or customer.
Class - a grouping of characteristics. For each object , there can be more than one class.
Characteristic - a property of an object, this is the new field being added. For example a new characteristic might be the color for a material.
Characteristic value - the actual value of the characteristic, for example a material might be colored blue.

Points-To-Remember

  • Classification in ECC allows multiple values, which is not supported in BW. If a characteristic is defined with multiple values, we cannot extract that characteristic to BW. E.g.: color is a characteristic of material and may have only the value blue; if it can hold multiple values, it will not be extracted to BW. In that case, two characteristics should be defined in ECC, color1 and color2, each holding a single value (e.g. blue and black).
  • When adding characteristics to datasources in CTBW, they can be added irrespective of their classes.
  • When characteristics (attributes) are added to the datasource in ECC via CTBW, corresponding characteristic/attribute datasources are created for all attributes of type CHAR. They can/will be transferred to BW. They are all text datasources.


Classification Tcodes
CL03     - Displays Classes
CT04     - Displays Characteristics
CTBW   - Datasource Generation

Function Module
BAPI_CLASS_SELECT_OBJECTS
This FM needs the class type and class name (CLASSNUM) in its selection.
The output table SELECTEDOBJECTS gives a list of all places where this specific class is used.
For example, with class type '111' and class name 'local_cust' for customer, when executed, SELECTEDOBJECTS gives all the customers which have classifications maintained for local_cust.

Classification Tables
AUSP     - Characteristic Values - Has the characteristic ID (ATINN) and the values of the objects (e.g. OBJEK holds material numbers with the corresponding characteristic IDs; this table connects datasource objects such as material and equipment numbers to characteristic IDs)
CABN     - List of Characteristics - Characteristic ID and Characteristic Name
CAWN     - Characteristic Values - Characteristic ID and Values
CAWNT    - Value Texts - Characteristic ID and Value Texts

Understanding the tables
When a class is created, it has a number of characteristics in it. Each characteristic is given an internal characteristic number (ATINN) and a name (CABN-ATNAM), and the values that can be assigned to the characteristic are numbered with a counter (CAWN-ATZHL).

Corresponding text of each internal characteristic / counter is present in table CAWNT.

Example,
For object MATERIAL there are 2 classes maintained: MATERIAL_PHY and MATERIAL_ORG.
For class MATERIAL_PHY there are 3 characteristics: (1) Color (2) Base (3) Shape.

So each characteristic here is given an internal characteristic number and name.
10000001 - Color , 10000002 - Base , 10000003 - Shape

Color (ATNAM) can have 5 possible values (ATZHL): Blue, Green, Red, Yellow, Orange (ATWRT). Base can have 2 values: Fiber, Glass. Shape can have 3 values: Square, Rectangle, Triangle.

So the tables entries will be as follows.

CAWN
ATINN - ATZHL - ATWRT
10000001 - 1 - BL
10000001 - 2 - GR
10000001 - 3 - RE
10000001 - 4 - YE
10000001 - 5 - OR
10000002 - 1 - FI
10000002 - 2 - GL
10000003 - 1 - SQ
10000003 - 2 - RC
10000003 - 3 - TR

and the texts are maintained in CAWNT

ATINN - ATZHL - ATWTB
10000001 - 1 - Blue
10000001 - 2 - Green
10000001 - 3 - Red
10000001 - 4 - Yellow
10000001 - 5 - Orange
10000002 - 1 - Fiber
10000002 - 2 - Glass
10000003 - 1 - Square
10000003 - 2 - Rectangle
10000003 - 3 - Triangle
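To see how these tables link up in practice, here is a rough lookup sketch (the object key, class type and characteristic name are hypothetical):

DATA: ls_cabn TYPE cabn,
      lt_ausp TYPE STANDARD TABLE OF ausp.

* Characteristic name -> internal characteristic number
SELECT SINGLE * FROM cabn INTO ls_cabn
  WHERE atnam = 'COLOR'.

* Values assigned to one object; OBJEK would be e.g. a material number
SELECT * FROM ausp INTO TABLE lt_ausp
  WHERE objek = '000000000010000001'   " hypothetical object key
    AND atinn = ls_cabn-atinn
    AND klart = '001'.                 " class type (001 = material)

* ATWRT in LT_AUSP now holds the value(s); the texts come from
* CAWNT via ATINN/ATZHL.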


Creating / Enhancing a Classification Datasource
ECC
Step 1
Check the characteristics present for the class in CL03

Step 2
Goto CTBW. If datasources are already defined, they are listed here; if not, define the datasource here first.


Selecting the datasource and clicking on 'Characteristics' on the left side shows the existing characteristics associated with the datasource.
Check in CL03 for the characteristics to be maintained and add them here.



In edit mode, use 'New Entries' to add the characteristics to the datasource.
Once the characteristics are added, they are in status 'N'. Click on the highlighted 'DataSource' button; the datasource is regenerated and the fields change to status 'R'.

The datasource will be available in RSA6 and data can be checked in RSA3.

Note
  • In some cases the editing must be done in the development client, where the characteristics might not be available via F4 help. Just copy the characteristic name from the test client and add it.
  • Individual characteristic datasources will be created for all the characteristics (not for time fields / key figures) and are added to the datasource.
  • Characteristics defined with multiple values cannot be added to the datasource. [Go to the individual characteristic in CT04 -> Basic Data -> Value Assignment should be 'Single Value'.]
  • Sometimes, when you enter a new characteristic in CTBW and press Enter, nothing happens. That is fine. Go back to the datasource and check; you will see the newly added characteristics. Generate the datasource and everything should be fine.
  • CONVERSION_EXIT_ATINN_INPUT / CONVERSION_EXIT_ATINN_OUTPUT convert between the characteristic name and the internal characteristic number (ATINN).


BW
Replicate the datasource in BW like any other datasource. There is a dedicated way to replicate classification datasources, in which all the necessary InfoObjects are generated, but I prefer creating my own InfoObjects.



Data
The master data objects have 'Classification' tabs/links in their tcodes where the data can be seen.
For e.g.: characteristics of equipment can be seen in XD03 -> Equipment -> Extras -> Classification.



Some common issues
- Data was extracted from incorrect client

The infopackage fails with the above error. This can happen when clients are changed.
Go to CTBW in the source system and check the client number against the datasource.
Compare it with the client of the system connected to the BW system; they should be the same.


Reference Notes
http://scn.sap.com/docs/DOC-62180
https://wiki.scn.sap.com/wiki/display/SAPMDM/Issues+in+fetching+internal+characteristics+of+materials

Saturday, August 8, 2015

Infosource

Infosources
A non-persistent structure consisting of InfoObjects for joining two transformations.

You always use an InfoSource when you want to perform two (or more) transformations consecutively in the data flow - without additional storage of the data.

My Notes 
Having read articles on InfoSources, I am not able to completely understand their use. That might be because I did not work on BW 3.5, where InfoSources are mandatory and flows cannot be connected without them. Having worked from BW 7.0 on, I don't find them useful; everywhere they say we can use one to connect different datasources to a DSO, I would use a direct transformation.

Useful Links:


Monday, August 3, 2015

Code Snippets / Syntaxes





Getting Source System

Enabling/Disabling 3.X content

Declaring and using range table
DATA: lt_version TYPE RANGE OF rsobjvers,
      lw_version LIKE LINE OF lt_version.

lw_version-sign   = 'I'.
lw_version-option = 'EQ'.
lw_version-low    = 'A'.
APPEND lw_version TO lt_version.
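The range can then be used like any selection table in a WHERE clause or an IN comparison, e.g. (RSDIOBJ used purely for illustration):

DATA lt_iobjnm TYPE STANDARD TABLE OF rsiobjnm.

* Fetch only InfoObjects whose version matches the range (here: 'A')
SELECT iobjnm FROM rsdiobj INTO TABLE lt_iobjnm
  WHERE objvers IN lt_version.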

Negative Number to String
To convert a string like '-2.0481607392E8' to a number, use DECFLOAT16/34.
DATA: lv_string TYPE string,
      lv_number TYPE DECFLOAT16.

lv_string = '-2.0481607392E8'.
lv_number = lv_string.

Dynamic Declaration of internal tables
FIELD-SYMBOLS : <fs_field> TYPE any.
DATA: g_field TYPE REF TO data.
DATA : g_name TYPE string.

g_name = 'RSBKDTP-DTP'.

CREATE DATA g_field TYPE (g_name).

ASSIGN g_field->* TO <fs_field>.


Monday, July 27, 2015

Compounding

A compounding attribute is nothing but a combination of two characteristics that together form a unique value; neither is existent without the other, or in other words, there is no meaning if they are separated.

Typically, in an organization the employee IDs are allocated serially, say 10001, 10002 and so on. Say your organization comes out with a new employee ID scheme where the employee ID for each location starts with 101. The employee ID starting for India would be India/101 and for the US would be US/101. Note that the employees India/101 and US/101 are different. Now if someone has to contact employee 101, he needs to know the location, without which he cannot uniquely identify the employee. Hence, in this case, location is the compounding attribute.

A maximum of 13 characteristics can be compounded for an InfoObject. The characteristic values can have a maximum of 60 characters; this includes the concatenated value, meaning the total length of the characteristics in the compounding plus the length of the characteristic itself.



Friday, July 24, 2015

Data Loading

LO-related datasources.


  • New Datasource
    • Checking : Check whether data is present in the setup tables. Go to SE16 and enter the setup table name:
      • MC<Application Number>M_0<datasource>SETUP
        • MC02M_0ITMSETUP
    • Deleting : If data is present in the setup tables, delete it and reload. Remember that setup tables are filled for the whole application, not per datasource.
      • Setup table data is deleted for the whole 'Purchasing' application when deleting for '02', even if you want to reload data for only the purchasing item datasource 2LIS_02_ITM.
        • T-Code : LBWG 
        • Or SBIW -> Settings for Application-Specific Datasources(PI) -> Logistics  -> Managing Extract Structures -> Initialization -> Delete the contents of the setup tables  
    • Filling : Go to the corresponding t-code for each application (unlike deleting, the setup table filling t-code is different per application), or go to SBIW -> Settings for Application-Specific DataSources (PI) -> Logistics -> Managing Extract Structures -> Initialization -> Filling in the Setup Tables -> Application-Specific Setup of Statistical Data -> (choose your application).
      • Depending on the screen you get there, selections can be given and data filled into the setup tables, or you can run it wide open.
      • Remember to run it in the background, so that it is easy to monitor and know the status. The job will be scheduled under the job name 'RMCENEUA'.
      • Once the job is completed, the data is available to load to BW via 'Full Update'.
  • Existing datasource
    • Once the new field is added to the datasource, there are two ways to go.
      • Historic data is not required for the new field
        • We need not do anything. Just add the field (and the exit code, if any is needed) and let the regular process chains pick up the delta and fill the data.
      • Historic data needed
        • For this we need to do a full load, following all the steps as for a new datasource, to fill the history.
        • Look for downtime (when the deltas are not running), fill the setup tables and do a full load to BW and to the infoprovider. This takes care of the existing data and also writes data to the new fields.
        • The delta need not and will not be disturbed. No initialization is required.

Thursday, July 23, 2015

BI Content

Steps to follow while installing BI content
-----------------------------------------------------
Select the objects which are needed and drag them to the right side.

In grouping, make sure you are installing only the necessary objects.

If the upward and downward data flow is needed (transformations, DTPs etc.), identify the object from the list, go back, search for the object on the left side, drag it individually and install it.

Go for the Tree View instead of the Hierarchy View (the default view), to get a better view of the objects.



Remember, clicking on the parent selects all the objects below it, even though they may not appear selected. Take care to select the individual objects you want to install.


Right-click and select 'Do Not Install Any Below' to make sure.


A better option is to get the technical names of the individual transformations or DTPs, whatever is needed from the flow, and then install each object individually by searching for it in the content. This way you install only the necessary objects.


Datasources

Replication :
Goto RSDS -> Datasource and Source System -> Replicate

Wednesday, July 22, 2015

Purchasing

Purchasing


Link for information in help.sap.com
Information from SAP content

Datasources


Setup Table Structure                : MC02M_0(HDR)SETUP
Deleting Setup tables                 :  LBWG - Application : 02
Setup Table deletion job name   :  RMCSBWSETUPDELETE
Setup Table Filling Job Name      :  RMCENEUA
Filling setup tables                     : OLI3BW

Points to Remember

  • Purchasing extractors do not bring in records that are incomplete, i.e. EKKO-MEMORY = 'X' (incomplete).
  • A PO item marked for deletion in ECC is not reflected in BW via delta/full loads. Click HERE


Tables
EKKO
EKPO
EKBE
EKKN


Transactions
ME23n
ME2L ( Account Assignment )


2LIS_02_ITM

SYDAT is 'item created on' - EKPO-AEDAT.
In LBWE it says EKKO-SYDAT, but it is taken from EKPO (*)

2LIS_02_ACC

Base Tables 
EKKO - Header
EKPO - Item
EKKN - Account Assignment in Purchasing Document

2LIS_02_ACC- NETWR
Even though the extract structure shows EKKN as the base table for this field, it is actually calculated using a formula (check the SAP notes link for more).

For documents with specified purchase order quantity, such as purchase order documents, the net purchase order value is calculated as follows:

NETWR = EKPO-NETWR / EKPO-MENGE * EKKN-MENGE
(Net purchase order value = Net order value of the item / Item quantity * Quantity posted to the account)


For documents with target purchase order quantity, such as contracts, the net purchase order value is calculated as follows:
NETWR = EKPO-ZWERT / EKPO-KTMNG * EKKN-MENGE
(Net purchase order value = Target value of the item / Target quantity of the item * Quantity posted to the account)
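For illustration, with made-up numbers: if EKPO-NETWR = 1,000, EKPO-MENGE = 10 and the account assignment line has EKKN-MENGE = 4, the extractor reports NETWR = 1,000 / 10 * 4 = 400 for that account assignment line.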

2LIS_02_ACC-LOEKZ
In the extract structure the value of LOEKZ comes from EKPO, but no data comes from the extractor for it. LOEKZ is passed internally to ROCANCEL, and if ROCANCEL is mapped in the DSO, those items are deleted, since LOEKZ is the deletion indicator. If you need the LOEKZ value, fetch it in the enhancement into a Z-field.



SAP Notes Link
https://help.sap.com/saphelp_nw70/helpdata/en/f3/487053bbe77c1ee10000000a174cb4/frameset.htm




Monday, July 20, 2015

Infopackages


These are used to fetch data from ECC into BW. There are different update modes that can be used when fetching data from the source system.


Above is the picture of the various tabs of an infopackage.

  • The Data Selection tab is where we can add filters for extracting data from the source. The filters must be maintained in the source system to be visible here.
  • The Extraction tab gives the details of the type of the datasource (*needs more appropriate explanation).
  • The Processing tab tells you up to what level the data will be processed, e.g. PSA only or data targets only.
  • The Data Targets tab lists all the objects into which the data is to be loaded.
  • The Update tab is the most important tab, as it defines how the datasource should work. More details are given below.
  • The Schedule tab is for triggering the data extraction.
Update Tab
This is the important tab in the infopackage. Here we specify the type of update.


Full Update

When you run a Full Update, whatever data is in the source is pulled to BW. With only full updates you can't run a delta, as no pointer is set to find the delta records. Only after initializing the datasource can delta records be fetched.
A full update requests all data that corresponds to the selection criteria you set in the scheduler. If, say, 100,000 data records accumulate in the period specified in the scheduler, a full update requests all 100,000 of them.
When running full loads, if we have more downtime, we can fill the setup tables and do a full load. Otherwise, we can do an initialization without data transfer, which fetches no data but places a pointer for the delta, so the next extraction fetches both the full-load records and the delta records.

Initialize Delta Process
Only after doing the initialization with one of the options below is the datasource made delta-capable and visible in RSA7. The Delta option in the infopackage is also available only after this process.

  • Initialization with Data Transfer
    • When executed, this creates an entry in RSA7, which means the pointer is set for delta records. It also fetches all records from ECC, and after it completes, it places the pointer. It is a Full Load + Initialization.
  • Initialization without Data Transfer
    • When executed, this creates an entry in RSA7, which means the pointer is set for delta records. It fetches a single record to BW, which is a header record.
  • Early Delta Initialization
    • With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the updating of data in the source system.
    • You can only execute an early delta initialization if the DataSource extractor called in the source system with this data request supports this.

Delta
A delta update only requests data which has appeared since the last delta. Before you can request a delta update, you must first initialize the delta process. A delta update is only possible for loading from SAP source systems. If a delta fails (status 'red' in the monitor) or the overall status of the delta request has been set to red manually, the next data request is performed in repeat mode.

  • F : Flat file provides the delta
  • E : Extractor determines the delta, Ex: LIS, COPA
  • D: Application determines the delta, Ex: LO, FI-AR/AP
  • A: Use ALE change log delta


Full Repair  Request

This is available in the 'Scheduler' menu -> Repair Full Request at the top when you open the infopackage. It is used to fill gaps where delta extraction did not extract all delta records, e.g. when the delta extraction process failed and cannot be repeated/restarted.
The request gets data from the setup tables without the delta being disturbed, with the setup tables filled only according to the selection criteria.
It is also useful when you are initially extracting large volumes of data: you execute an init without data transfer and then execute multiple infopackages (Full Update only) that are full repair requests with specific selection criteria.

Finding the Infopackage Job

Once the infopackage is triggered for running, if you want to see the job details,
Environment -> Process Overview -> In the Source System

Else,
Copy the request number, go to the source system, and in SM37 search for job name 'BI<request number>' with user '*'.

Data transfer using Infopackages

Data transfer using infopackages is done using IDocs and tRFC. Check LINK for more info.
Infopackages can fail for a number of reasons.
Some of the common issues can be found here: LINK


Thursday, July 16, 2015

LO - Adding new fields to the existing data flow

LO Extraction

Activating new datasource in LBWE

  • Go to LBWE and look for the datasource
  • The datasource can be one of the following states



  • Click on 'Inactive'. This makes the datasource active. The system might ask for a customizing transport and gives a pop-up with an MCEX notification; continue, and the datasource is activated.
    • The above step activates the datasource in the development client. If development and testing are done in different clients, use the SCC1 tcode to copy the changes between the clients.



Enhancing existing datasources

Adding field in LBWE
Check if the field you are looking for is present in the datasource structure. Go to LBWE and look at 'Maintenance'. If the field is there, we can directly add it by moving it from the right side to the left.

Adding field as an enhancement
If the field is not present in the extract structure, we need to add it to the append structure of the datasource.

There is also a way to enhance the MC structure itself for the new field, which I never tried...!!

Adding fields in LBWE

Now that we already have the field in LBWE, all we need is to move the field from the right side to the left side. We can do that with the steps below.

Step 1 
Check and clear the setup tables.

Step 2
Clear the delta queue in LBWQ.
If there are entries in LBWQ, run the V3 job - this moves the entries to RSA7.
You can also delete the queues in LBWQ, but you would lose those entries.

Step 3
Go to LBWE and click on 'Maintenance'.
Click on the fields needed and move them to the left. Once a field appears in the pool, it can be moved from the right side to the left side and is ready to use.

There will be a couple of pop-ups with information; read them (if you want) and click OK. Once the field is added, you can see that the datasource becomes red and inactive.

Now click on the datasource; this activates it and takes you to the maintenance screen. The new fields are hidden by default. You need to uncheck 'Hide field' and 'Field only known in customer exit'.
This step changes the datasource from red to yellow.
Click on Job Control (Inactive), and this makes it active.
The field is now successfully added to the datasource and everything is active.
Note: If you have separate development and dev-testing clients, you need to copy the customizing changes to the test client (this is not the test system; this applies only if you have a test client).
Use tcode SCC1 in the target client and run it for the customizing transport; check 'Including Request Subtasks' as well.
Since customizing transports are at client level, they need to be copied to the different clients this way.


Possible Errors
If you get the error below, it means that LBWQ has not been cleared:
"Entries for application 13 still exist in the extraction structure"
Sol :
Run the V3 job to clear the entries in LBWQ.


Struct appl 13 cannot be changed due to setup table
Sol :
Delete the setup tables in LBWG.

Running Setup Tables
When the tcode for the setup tables is run, you might get the error below:
"DataSource 2LIS_02_*** contains data still to be transferred"
This means that not all data from the ECC queues has been transferred to BW.
Make sure that LBWQ is empty for the extractor (in most cases the V3 job runs regularly every 15/30 minutes, so these queues get cleared). Next, clear the delta queues in RSA7 by pulling them into BW. Sometimes you might need to run the infopackages multiple times if there is activity happening in the source system.
Once LBWQ and RSA7 are empty, you can run the t-code and it will go through fine.

Adding fields in structure
Step 1
Go to RSA6 -> click on the datasource and go to the MC* structure.
If you want to create a new append structure, click on Append Structure -> New.
Else, go to any of the existing append structures and add your fields there.


Step 2
Open the datasource again in RSA6 and you can see that the new fields are all hidden. Click edit and unhide the fields.

Step 3
Next, the code for extracting these fields should be added in CMOD or a BAdI implementation, whichever the company is using.



Wednesday, July 15, 2015

ECC Tables

ECC Operational Tables

CDHDR - Change document header (any changes to a document are recorded here)
CDPOS - Change document items
BDCP - Change pointer table
BDCP2 - Aggregated change pointers (BDCP, BDCPS) - master data delta changes
SEOCOMPO - List of all BAdI implementations (classes) and corresponding methods
ROOSGENDLM - Generic delta management for DataSources (gives details of when the last delta was run)
TMCEXACT - LO data extraction: activate DataSources/update (when activating a new LIS datasource, check here for inconsistencies)
TCDOB/T - Objects for change document creation
RODELTAM - Delta properties for the different delta methods
ROIDOCPRMS - Control parameters for data transfer from the source system (IDoc configuration)
ROOSSHORTN - DataSource short name
ROOSPRMSC - Control parameters per DataSource channel (delta initializations)
TRFCQOUT - tRFC queue description (outbound queue) - SMQ1
ARFCSSTATE - Description of ARFC call status (send) - SAP Note 378903
CLBW_SOURCES - Data sources for classification data
BWOM_SETTINGS - BW CO-OM: control data (all FI-related control parameters; https://blogs.sap.com/2013/04/02/bwomsettings-for-fi-loads-in-sap-bi/)
BWOM2_TIMEST - BW CO-OM: timestamp table for delta extraction
BWFIAA_AEDAT_AS - FIAA-BW: new and modified master records for delta upload


Functional Tables

BKPF
BSEG
FAGLFLEXA - General Ledger: Actual Line Items
KNA1 - Customer General Data
KNB1 - Customer Master - Company Code Data (payment method, reconciliation acct)
KNB4 - Customer Payment History
KNB5 - Customer Master - Dunning Info
KNBK - Customer Master Bank Data
KNKA - Customer Master Credit Mgmt.
KNKK - Customer Master Credit Control Area Data (credit limits)
KNVV - Sales Area Data (terms, order probability)
KNVI - Customer Master Tax Indicator
KNVP - Customer Partner Function Key
KNVD - Output Type
KNVS - Customer Master Ship Data
KLPA - Customer/Vendor Link
MARA - Material Master: General
MAKT - Material Master: Short Description
MARM - MM Conversion Factors
MVKE - Sales <Sales Org, Distr Ch>
MLAN - MM Sales <Country>
MAEX - MM Export Licenses
MARC - Material Plant
MBEW - MM Valuation
MLGN - MM WM Inventory
MLGT - WM Inventory Type
MVER - MM Consumption Plant
DVER - MM Consumption MRP Area
MAPR - MM Forecast
MARD - MM Storage Location
MCH1 - MM X-Plant Batches
MCHA - MM Batches
MCHB - MM Batch Stock
MARCH - MM C Segment: History
MARDH - MM Storage Location Segment: History
MBEWH - Material Valuation: History
MCHBH - Batch Stocks: History
MKOLH - Special Stocks from Vendor: History
MSCAH - Sales Order Stock at Vendor: History
MSKAH - Sales Order Stock: History
MSKUH - Special Stocks at Customer: History
MSLBH - Special Stocks at Vendor: History
MSPRH - Project Stock: History
MSSAH - Total Sales Order Stocks: History
MSSQH - Total Project Stocks: History
AUFK - Order Master Data
VBPA - Sales Partner Functions
ADR6 - E-Mail Addresses (Business Address Services)
VBPA2 - Sales Document: Partner (used several times)
FPLTC - Payment Cards: Transaction Data (SD Sales)
AEOI - ECH: Object Management Records for Change Master (change records, revisions)
AENR - Change Master
NACH - Detailed Output Data / Output Conditions (TCode: VV31/32/33)

Wednesday, July 8, 2015

Master Data

Master Data Tables

Basic Notes

Click HERE to get basic information on Master data tables and Architecture


Deleting Master Data

As time goes on, some junk/obsolete values get accumulated in the master data tables. Master data which is not being used in any infoprovider can be deleted directly from the table without any issues.


Default : 
If the deletion of master data is selected, the system automatically deletes the entries which are no longer used. When this is executed, the entry gets deleted from the P table (/BI0/PCUSTOMER) of the InfoObject, but it is still visible in the SID table (/BI0/SCUSTOMER), in the following way.

Here, CHCKFL, DATAFL and INCFL are all blank, because the specific entry was deleted from the P table but the SIDs were not deleted.
The reason for not deleting the SID value is that if, in the future, the same customer number arrives as correct data, the system need not spend time generating a new SID value; the existing SID value is reused.
However, if we know the data is incorrect, it can be deleted together with the SIDs.

We can select how we want to delete the master data from below options.

Delete SIDS
When deleting the entries, the system also deletes the corresponding SID values.

Delete Texts
It deletes the texts

Simulation Mode
This simulates the deletion without deleting the actual data in the system.

Store Master data Where used list
<need to check>

Search Mode

In the output, it gives information about the data being selected.
When 'O' - if we are deleting a customer, it gives one of the many places where the specific customer number is used.
When 'P' - gives one usage per infoprovider.
When 'E' - gives one usage.
When 'A' - gives all the places where the customer number is used.


Program to delete master data
RSDMDD_DELETE_BATCH

The advantage of using this program is that we can delete individual entries of the master data using the 'Change Filter' option. Once the program has run, the log can be seen in tcode SLG1 (see below for details).

'Check also NLS' should be selected for the deletion to run (NLS = Near-Line Storage).

 
Running the program without selecting any option deletes the master data entries which are present in the InfoObject but are not used anywhere, without deleting the SIDs.


Tcode :
Logs after master data deletion : SLG1
Enter the values below to run the tcode:
Object : RSDMD
SubObject : MD_DEL





Usage of SIDs
FM :  RSDDCVER_USAGE_MDATA_BY_SID

Points To Remember

- Making Display Attribute as Navigational Attribute
  • Changing an attribute of an InfoObject from display to navigational does not affect other objects where it is used. No other activations are required. Sometimes the InfoObject might not get activated on the first try, giving the error: "characteristic: the attribute SID table(s) could not be filled".
    • Activate the object again and it goes through (not sure why).

Extended Star Schema

Extended Star Schema

The Extended Star Schema has a fact table in the middle surrounded by dimension tables. The dimension tables contain dimension IDs and SIDs. The SID table is the table which connects the dimension table to the actual (master) data table.
In the diagram below, if we want to know the revenue from a material for a specific customer: each customer number has an SID generated, and in the same way, each material has an SID generated.



For each multi-dimensional data model we have a fact table, which holds all the key figure values. For each dimension defined there is a dimension table, which holds the dimension ID and SIDs. For every InfoObject value there is an SID table generated, which connects the InfoObject values to the dimension values.
When we define attributes, there are 3 ways of doing it:
  • Have it as a characteristic in the attributes
  • Add it as a navigational/display attribute (more notes later), where it does not directly reside in the cube but can be reached by drilling down
  • As a hierarchy.

Customer | Product | Color | Sales | Fact
C1       | P1      | Black | S1    | 30

Customer and Sales are individual dimensions; Product and Color belong to the same dimension.
When this record comes to BW, SID tables are created for each of them. Below you can see there are 4 different SID tables (highlighted in yellow), and each value has an SID created (boxed in red).
Once the SIDs are created, the DIM tables follow. Since Sales and Customer are different dimensions, there are separate DIM tables; Product and Color belong to the same dimension, hence there is only one DIM table. The values of the DIM tables are filled as shown in the figure below.

Hence the entry is completely written. Now comes another entry.
The values are populated as follows.

Customer | Product | Color | Sales | Fact
C1       | P1      | Red   | S1    | 40
  
Advantages:

  • The master data is kept outside the InfoCube, which allows it to be accessed from other InfoCubes/InfoProviders.
  • Alphanumeric values are converted to numeric SID values (surrogate IDs), increasing processing speed.