Wednesday, July 27, 2016

LO Cockpit change warning message

Extract structure MC11VA0ITM generated successfully, see long text

    Notifications MCEX 027

Diagnosis

    The extract structure was generated successfully.

Procedure

    Now choose the maintenance screen of the DataSource. After that, the
    status display (traffic light) jumps from red to yellow.



    Please note the following before you transport the changed
    structure MC11VA0ITM into another system:

    o   Run the transport while no documents are being posted in the
        target system. Otherwise you will have to re-initialize, because
        documents posted during this time are lost.

    o   No client in the target system should contain data in the V3
        update for application 11. If you are unsure, start the V3
        update of application 11 in all clients.

    o   If there is still data in the central delta management of the
        target system, it must be retrieved by BW before the transport
        takes place.

    o   If you have already run a reconstruction in the target system,
        data may still exist in the reconstruction tables. After the
        transport, you can no longer transfer these entries into BW.
        Delete the contents of the reconstruction tables of application
        11 in the target system and run another reconstruction if
        necessary.

    o   If there is an update log in the target system, you cannot read
        the log data of application 11 after the transport has been run.
        You can only read it again once data is posted and the last log
        entry is overwritten.

    o   Use report RMCSBWCC to display a log for the changed extract
        structure in the target system and check whether any of the
        above problems exist. An additional switch deletes all update
        logs for the application of the selected extractor.

 Please also note that you must reactivate the transfer structures when
 you change the extract structure after replicating a DataSource.

----------------------------------------------------------------------------------

Update activated                      -> Always observe long text

    Notifications MCEX 146

Diagnosis

    You have activated extraction with an extract structure.

    When the V3 update is active for the affected application, update
    entries that contain "collective processing" modules are created for
    every document change.

    These cannot be processed automatically, either immediately or later,
    as is possible for a V1 or V2 update. You have to start the
    processing of these update entries explicitly. For logistics
    extraction, you can set the time for this explicit start using the
    "Job Control" function.

    If changes are made to the ABAP Dictionary objects used in the update
    module between the time an update entry is created and the V3 update
    process, the update entries can no longer be processed and an update
    termination ensues.
    In this situation you can only delete the update entry. The data
    belonging to it is lost.

    If you chose the update mode 'Queued Delta' for the affected
    application, entries are generated in an extraction queue whenever a
    document change is made.

    These entries are not processed automatically. You have to explicitly
    trigger the processing of these entries in the extraction queue. For
    logistics extraction, you can set the time for this explicit start
    using the Job Control function.

    If changes are made to the ABAP Dictionary objects used in the module
    between the time an entry is generated in the extraction queue and
    the processing of this entry, the queue entry can no longer be
    processed and causes a runtime error.
    If this happens, you can only delete the queue entry - the data that
    belongs to it is lost.

Procedure

    You must inform your system administrator about this, so that they
    can ensure that no open update entries or entries in the extraction
    queue (see the Log. Queue Overview) exist in the system before the
    following activities are performed:



    o   R/3 support package import

    o   R/3 Upgrade

    o   PlugIn support package import

    o   PlugIn Upgrade

Transformations and Routines

Global Variable in BW Routine
If you want the value of a variable or internal table to be retained across all the data packages of a load, without it being cleared or re-selected each time ( eg: adding a counter for all records ),
we need to declare it as a global variable.
The declaration is the same as for any other ABAP variable or internal table ( DATA: lv_flag TYPE c ), but the place where you declare it makes all the difference.

In the routine, global declarations should go right at the top, after the generated structure definition. The system indicates the place:

*$*$ begin of global - insert your declaration only below this line  *-*
DATA gv_flag TYPE C.
*$*$ end of global - insert your declaration only before this line   *-*
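
A minimal sketch of how such a global is used ( assuming a field routine; gv_counter and the logic are illustrative, not generated code ):

*$*$ begin of global - insert your declaration only below this line  *-*
DATA gv_counter TYPE i.            " not reset between data packages
*$*$ end of global - insert your declaration only before this line   *-*

* ... then, inside the routine body, executed once per record:
gv_counter = gv_counter + 1.       " running count across all packages
RESULT = gv_counter.               " e.g. assign a running record number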

Transports

Reassigning Task
This is used to reassign a task from one transport to another.


Merge Transport
Select the merge requests option; this happens at transport (request) level.
All tasks from the transport being merged are moved to the other transport, and the original transport is deleted from the system.

Moving customizing transport
Release the task under the transport.
Open the target client.
Go to T-Code SCC1.
Give the client and transport number.
Execute.
The changes will be visible in the target client.

Simulate Release
When releasing a transport, we can use 'Simulate Release' to check whether the transport will throw any errors when imported into the target system.

Creating Transport of Copies (ToC)

SE09

Please note that objects cannot be collected directly in a ToC.
Collect the objects in another transport first.
Go to the ToC in edit mode, include the objects manually from that transport into the ToC, and lock the objects in the ToC.
Release the ToC.
Add it to the current system buffer and forward it to the target system ( steps below ).


Adding transport to Buffer to import in another

Go to the source system A and look for the transport in STMS.
If it is not there, we need to add the transport to the system buffer first before forwarding it to the other system.
Go to Extras -> Other Requests -> Add

Here add the transport again ( the check box needs to be ticked ).
This will add the transport to the buffer.


Now forward the transport to the target system B.

This will add the transport from source system A to target system B.
Now go to STMS in target system B, click on the transport and import it.

You can delete the transport from the buffer of A, if you want to.

Reimporting Failed Transport in STMS

There is no repeat option in STMS ( I think ).
Add the transport again to the buffer and import it.

Function Module to import transport

TMS_TP_IMPORT

Automating Transport Imports


Friday, July 22, 2016

UD Connect

UD Connect 

Universal Data Connect

UD Connect in layman's terms
- UD Connect is the 'interface' which connects to different source systems, i.e. different servers and the various databases inside them.
For eg : we want a table from MS SQL Server; the server has various databases and each database can have a number of tables inside it.
We create a source system in BW, under UD Connect, and give the table name.
UD Connect itself runs on a separate system ( DX1 in the example below ) where the configuration is done. Inside that system, it uses a JDBC connector to extract data.

---> We trigger the infopackage in BW (DEV1)
---> The request goes to UD Connect (DX1)
---> This system has different drivers configured inside it ( ODBC drivers, JDBC [Java Database Connectivity] drivers ). It uses the source server / source database / login details / driver information stored here and logs on to the source system
---> It fetches the data and returns it to BW ( more detail in the MS SQL to BW link below )


- Java is the programming language on the connector side.
- Loads from UD Connect are always FULL loads.
- UD Connect transfers data in a flat format; multi-dimensional data is also flattened this way.

MS SQL to BW using UDConnect

When data needs to be read from 3rd-party systems ( mostly MS SQL Server ), UD Connect is used to fetch the data. The BW developer needs to be provided with the information of the server from which data is to be extracted.
Sample information as follows.

Connection string:
ODBC;DRIVER=<sql server>;SERVER=<server name>;APP=2007 Microsoft Office system;DATABASE=<database name>;UID=<user_id>;Pwd=<password>

Along with the connection string, the ID and password for the server should also be provided. These details are used to log on to the server.

In general, the list of tables from which data has to be extracted is also given along with the connection string.



The Basis team, in general, would validate and create the connection to the server system ( Application Server J2EE Connectivity ). They would also give the name of the source system created, which will be used in BW for creating the source connection.

The connection should be created in each system being used ( Dev, Q, Sandbox, Prod etc. ).
The configuration is done on the DX1 system. This system is the connection system between DEV1 and the SQL Server from which the data is to be extracted.
The DX1 system has all the drivers installed for the connectivity. UD Connect mostly connects to Microsoft SQL Server, and based on the version of the source SQL Server, different JDBC drivers are to be used.

The DX1 system needs to be restarted once the change is done.



BW Side changes
Creating Source System in BW
Once the connection is created by Basis,

RFC Destination : UDCONNECT
Logical System Name : name you want to assign in BW
Type of Connector : JDBC
Name of the Connector : logical name given by Basis when creating the connection / source ( DX1 )

Source System Name : not sure; I give the same as the Logical System Name and it works
Source System Type and Release : not sure; can be left blank

Once the source is created, check and activate the system. The options can be found by right-clicking on the source.

Once the connection is created successfully, an entry is added in table RSLOGSYSUDC with the logical system name and the UDC connection name.


Create Datasources under the source system
  • Individual datasources are created for each table on the server in the 3rd-party system from which we want to extract data. (I think)
  • Along with the connection information, the names of the tables in the database from which data is to be extracted are also given.
  • The table name is given in the datasource 'Extraction' tab - UDC Source Object.



Common Errors / Notes

Any time a new source system is added, an entry must be added to table RSLOGSYSMAP in the target system; it maps the source system name used in the original system to the source system name to be used in the system into which the transport is imported.
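
For example, a mapping entry might look like the following ( a hypothetical entry; the field names SLOGSYS / TLOGSYS are from memory, so verify them in SE11 - the table can be maintained via RSA1 -> Tools -> Conversion of Logical System Names ):

SLOGSYS ( source system name in the original system ) : UDDEV
TLOGSYS ( source system name to use after the import ) : UDQA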


Some Errors


  • UD Connect connection data is not correct RSAR 397 ( BI Java connector 'TEST_UD' is entered for source system TEST_UD. This is not correct. Enter the correct BI Java connector in table RSLOGSYSUDC )
    • This might be because the UD Connect source was not created properly in the system. These connections should be created separately in all the systems ( dev, q and prod ) where the data needs to be extracted. Ask Basis to check the connection.
  • S:RSSDK:300 Field PLANT01 is not a member of /BIC/CAZTEST_DS000010000001
    • This happens when the datasource is defined initially with certain fields and new fields are added later.
    • Delete the PSA data before changing the datasource.
    • I stopped using the datasource and created a new datasource with all the fields. There should be a better way to do this, but this is how I did it. I should mention that I did not delete the PSA before changing.
  • STRING/XTRING types used Not supported by currently selected access method UDCGEN; cannot use this method. Currently used adapter UDCGEN is invalid. Reason: STRING/XTRING types used.
    • Some of the fields might have the RSTR type, and this is not supported through UD Connect. The alternative is to ask for the type of the field to be changed in the source.


Other Information
ODBC and JDBC
ODBC is an open interface which can be used by any application to communicate with any database system, while JDBC is an interface that can be used by Java applications to access databases. Therefore, unlike JDBC, ODBC is language independent. By using the JDBC-to-ODBC bridge, however, Java applications can also talk to any ODBC-compliant database.

Thursday, July 21, 2016

Master Data Notes

The dependent attributes of a characteristic are stored in separate tables called Master Data Tables (MDTs). The master data tables are not directly linked to InfoCubes. Master data tables can be of two types; the type is set per attribute of the InfoObject.
-          Time-Independent MDT
o    All the non-time-dependent attributes are placed in this table
     /BIC/P<InfoObject>
-          Time-Dependent MDT
o    Time dependency of an attribute allows you to keep track of changes to it. The time-dependent attributes are stored in the Q table; only the attributes which are time-dependent are present in this table.
     /BIC/Q<InfoObject>

Master Data Text :

Textual descriptions are saved in text tables ( /BIC/T<InfoObject> ).

Hierarchy :

Hierarchies of characteristics may be stored in separate hierarchy tables ( /BIC/H<InfoObject> ).

SID Tables

SID tables play an important role in connecting the data warehouse information to the InfoCubes and the DSOs. To speed up access to InfoCubes and DataStore objects and to allow an independent master data layer, each characteristic and attribute is assigned a SID column, and their values are encoded into 4-byte integer values.
-          The SID table is always generated.
o    /BIC/S<InfoObject>
-          Non-Time-Dependent Navigational Attribute SID Table
o    This is generated when the InfoObject has navigational attributes
     /BIC/X<InfoObject>
-          Time-Dependent Navigational Attribute SID Table
o    This is generated when there are time-dependent navigational attributes
     /BIC/Y<InfoObject>

Points To Remember
If the InfoObject carries master data, all the SID tables are automatically generated and filled when the master data is loaded.
The SIDs for transactional data get generated depending on the DSO settings:
- During DSO activation
- During reporting
They can also be maintained during an InfoCube load if the 'No Referential Integrity' check is set in the InfoPackage.
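
These generated tables can also be read directly in routines. A minimal sketch, assuming a hypothetical InfoObject ZCUST with a time-independent attribute ZREGION ( all names are illustrative ):

DATA: lv_customer TYPE /bic/oizcust,
      lv_region   TYPE /bic/oizregion.

* Read the attribute from the time-independent P table of ZCUST
SELECT SINGLE /bic/zregion
  FROM /bic/pzcust
  INTO lv_region
  WHERE /bic/zcust = lv_customer
    AND objvers    = 'A'.          " only the active version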

Dimension Tables

Activation of an InfoCube creates its dimension tables. The columns of a dimension table are not the characteristics themselves but the SIDs of the characteristics you have chosen to be members of that InfoCube dimension. The unique key of a dimension table is the dimension ID (DIM-ID), which is a surrogate key (4-byte integer).
In the BI data model, a surrogate key is used as the unique key of each dimension table, rather than the real, most granular characteristic within the dimension. For each unique combination of SID values for the different characteristics within a dimension table, a unique surrogate key value is assigned. The dimension tables are joined to the fact table using these surrogate keys.
There are a total of 16 dimensions allowed in an InfoCube: 3 pre-defined ones and 13 user-definable.

Time Dimension

Unit/Currency Dimension

Packet Dimension.

With every load into an InfoCube, a unique packet ID is assigned. This allows you to purge erroneous loads without recreating the whole InfoCube. The packet dimension can increase overhead during querying and can therefore be eliminated using the compress feature of the InfoCube, once the loads up to a certain packet ID have proven correct.

Fact Tables

These are also created during InfoCube activation. They hold the DIM-IDs ( and SIDs, in case of line-item dimensions ) together with the corresponding facts.
-          Each row in the fact table is uniquely identified by a value combination of the respective DIM-IDs / SIDs of the dimension / SID tables.
-          Since BI uses system-assigned surrogate keys, namely DIM-IDs or SIDs of 4 bytes in length per dimension, to link the dimension / SID tables to the fact table, there will normally be a decrease in space requirements for keys compared to using real characteristic values as keys.
-          The dimension / master (SID) tables should be relatively small in number of rows compared to the fact table ( factor 1:10 / 20 ).
There are 2 different fact tables created for an InfoCube.
-          F Fact Table
o    When data is loaded into the cube, it sits in the F fact table with the corresponding request number. We can delete requests while the data is here. Once the data is compressed, it is moved to the E fact table.
o    The F table uses B-tree indexes.
-          E Fact Table
o    The compressed data sits here. When a request is compressed, it is merged with the existing data and the request ID is set to 0. No deletion of data by request number is possible any more, as the request IDs have been nullified.
o    Since the data is in compressed form, it needs less space.
o    The E table uses bitmap indexes.
o    Suppose only one customer, C100, is doing transactions, and across 100 requests there are 100 records. When the request IDs are eliminated by compression, all records are aggregated into 1 record and the key figures are summed up.

o    If 100 different customers are doing individual transactions and the cube is compressed, there are still 100 records, as the customer numbers differ.

Tuesday, July 19, 2016

Infosets

Infosets

10 Useful Tips on Infoset Queries

MultiProviders and Infosets







Classification Datasources


Classification Data sources

http://scn.sap.com/docs/DOC-28359.pdf

Routines

BW Routines

DTP Routine
When a filter routine is created in a DTP, the code below is generated.

data: l_idx like sy-tabix.

read table l_t_range with key
     fieldname = ''.               " insert the name of the filter field
l_idx = sy-tabix.
if l_idx <> 0.                     " ignore the if condition and populate the range table manually
  modify l_t_range index l_idx.
else.
  append l_t_range.
endif.
p_subrc = 0.


Range table
l_t_range-fieldname = 'CRM_OBJ_ID'.
l_t_range-option    = 'EQ'.
l_t_range-sign      = 'I'.

LOOP AT lt_objid INTO lw_objid.
  l_t_range-low = lw_objid-object_id.
* l_t_range-low = '12345'.         " or a fixed value instead
  APPEND l_t_range.
  CLEAR lw_objid.
ENDLOOP.

Whatever values are appended, will be present in the DTP filter at time of execution.
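
Putting the generated skeleton and the range fill together, a complete filter routine could look like this ( a sketch with a hypothetical fixed value; in practice the low value would come from your own lookup ):

data: l_idx like sy-tabix.

read table l_t_range with key
     fieldname = 'CRM_OBJ_ID'.
l_idx = sy-tabix.

l_t_range-fieldname = 'CRM_OBJ_ID'.
l_t_range-sign      = 'I'.
l_t_range-option    = 'EQ'.
l_t_range-low       = '12345'.     " hypothetical filter value

if l_idx <> 0.
  modify l_t_range index l_idx.    " overwrite the existing line
else.
  append l_t_range.                " or add a new one
endif.

p_subrc = 0.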


InfoPackage Routine Code
Once an InfoPackage routine is created, a skeleton similar to the DTP one above gets generated. We need to fill the range table with the values we want as filters.

In the generated code, after the READ statement, fill in the code for the range, depending on the type of range:

     l_t_range-fieldname = 'ZZTIME'.
     l_t_range-option    = 'EQ'.
     l_t_range-sign      = 'I'.
     l_t_range-low       = '12345'.

After this, the MODIFY statement in the generated code will take care of the modification.


System Commands

Date : sy-datum

Time : sy-timlo ( local time; sy-uzeit is the system time )

Logical System : 
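
There is no system field for the logical system. One way to read it is the standard function module OWN_LOGICAL_SYSTEM_GET ( a minimal sketch ):

DATA lv_logsys TYPE logsys.

CALL FUNCTION 'OWN_LOGICAL_SYSTEM_GET'
  IMPORTING
    own_logical_system             = lv_logsys
  EXCEPTIONS
    own_logical_system_not_defined = 1
    OTHERS                         = 2.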

Thursday, July 14, 2016

Table Maintenance

Tcodes
SE11 - Viewing Table Definition
SE16 - Viewing Table Data
SM30 - Table Maintenance


Creating Maintenance View
Go to SE11 --> Utilities --> Table Maintenance Generator
It takes you to the generator screen.

Technical Dialog Details
Enter 'Authorization Group', 'Function Group' and Package ( gets filled automatically based on the function group ).

Maintenance Screens
We can choose how many steps we need while creating entries: either one step or two steps. Check them individually and see which satisfies your needs better.

Assign the overview screen and single screen numbers. One step needs only an overview screen number. You can use 'Find Scr. Number(s)' at the top of the screen to check the previously used numbers and the available ones.
Each function group has its own screen numbers.

Dialog Data Transport Details
Use the default.

Click on create and the table maintenance generator is created.
Go to SM30 and add entries.


Note
When creating the table, 'Delivery and Maintenance' should be set to 'Display/Maintenance Allowed'.

Whenever you add or remove a field, the table maintenance needs to be regenerated.


Common Warnings/Errors

  • SM30 gives an error after moving the transport to a different system
    • When table maintenance is generated, it also creates 2 function modules under a function group; if those function group objects have not been moved along, the system doesn't allow us to maintain values.
  • View/table ZTEST_TABLE can only be displayed and maintained with restrictions
    • When creating the table, 'Delivery and Maintenance' should be 'Display/Maintenance Allowed'

Transporting table entries ( when a transport is not requested by default )

Go to SE09.
Select your task and enter as follows:
Program ID : R3TR
Object Type : TABU
Object Name : <Table name>

Click on the ( Object with Keys ) icon and it takes you to the screen with the table entries.

You can add entries under Table Keys. The main screen only asks for the key field values; internally, it collects the whole entry.

If you want to check the whole entries, click on 'table entries'; you can select how you want to see them.

Once the entries are entered, you can see them all under the task. Table entries cannot be locked by the transport, unlike programs and other objects.



Wednesday, July 13, 2016

READ_TEXT

These document texts are different from the short text fields stored directly on the document tables. They are saved mostly with transactional data. As there are different types of texts for each document ( eg : purchasing document ), SAP saves them in a compressed, non-readable form.

The 'READ_TEXT' function module is used to read this data. To use the FM, we need to know certain details of the texts we want to read.

The details can be found in the below tables.
STXH - STXD SAPscript text file header
STXL - STXD SAPscript text file lines - the data in this table is stored in compressed form, so the FM must be used to read it.


They can also be found using Tcode
SE75 - SAPscript Settings

Below is the information needed to run the FM.

OBJECT :
Go to Tcode SE75 and open 'Text Objects and IDs'.
Under Object ID, look for the base table of the text you are looking for.
For eg : Purchasing EKKO, EKPO;
             Sales Order VBBK, VBBP

NAME
This is the document number ( with or without item ) you are considering.
For eg : I need the Shipping Instructions for a sales document, so the sales order number is the NAME.

ID
Once you know the OBJECT name in SE75, double-click on the object name and it should give you all the available texts with their IDs. Get the ID you are looking for.
Eg : 001, 002 etc.

LANGUAGE
The language you need the text in.

You can also check the entries in table 'STXL' for a given document number. You get all the texts maintained for it with their IDs; you can double-check them against the IDs in the Tcode.

Once the FM is executed, the LINES table should give you the text maintained. It will come in either single or multiple lines, depending on how the text is maintained.

*** Reading the collection text
DATA: lv_name TYPE thead-tdname,
      lt_line TYPE STANDARD TABLE OF tline.

CALL FUNCTION 'READ_TEXT'
  EXPORTING
*   CLIENT                        = SY-MANDT
    ID                            = 'ST'
    LANGUAGE                      = 'E'
    NAME                          = lv_name
    OBJECT                        = 'TEXT'
*   ARCHIVE_HANDLE                = 0
*   LOCAL_CAT                     = ' '
* IMPORTING
*   HEADER                        =
  TABLES
    LINES                         = lt_line
  EXCEPTIONS
    ID                            = 1
    LANGUAGE                      = 2
    NAME                          = 3
    NOT_FOUND                     = 4
    OBJECT                        = 5
    REFERENCE_CHECK               = 6
    WRONG_ACCESS_TO_ARCHIVE       = 7
    OTHERS                        = 8.
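
To get the whole text as one string afterwards, the returned TLINE rows can simply be concatenated ( a minimal sketch; lv_text is an illustrative variable ):

DATA: lw_line TYPE tline,
      lv_text TYPE string.

LOOP AT lt_line INTO lw_line.
  CONCATENATE lv_text lw_line-tdline INTO lv_text SEPARATED BY space.
ENDLOOP.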

User Roles, Security, Profiles Etc


Roles : They are used to administer security features for various users in SAP.

Composite Roles : Roles which contain individual roles within them. The parent role is the composite role, and there are individual roles inside each composite role.

Standard Programs
IAM_API_TESTFRAME : for dealing with users and their corresponding roles
RSUSR002 : Gives details of users and roles
RSUSR002 : Gives details of users and roles

T-Codes
SU01 : User Maintenance
SUIM : User Information System

Tables
USH02            - Change history for logon data
AGR_AGRS   - Roles in Composite Roles
AGR_USERS - Assignment of roles to users

BAPIs
BAPI_USER_CREATE1
BAPI_USER_ACTGROUPS_ASSIGN



Reference Notes
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/668e6629-0701-0010-7ca0-994cb7dec5a3?overridelayout=true



Creating Users in SAP
BAPI_USER_CREATE1
The BAPI needs the following mandatory parameters.

DATA: lw_logondata TYPE bapilogond,
      lw_password  TYPE bapipwd,
      lw_address   TYPE bapiaddr3,
      lt_param     TYPE STANDARD TABLE OF bapiparam,
      lt_return    TYPE STANDARD TABLE OF bapiret2.

*& Passing the mandatory parameters
*& Logon Data
CLEAR lw_logondata.
lw_logondata-gltgv = sy-datum.    " User Valid From
lw_logondata-gltgb = '99991231'.  " User Valid To

*& Password
CLEAR lw_password.
lw_password-bapipwd = 'Welcome123'.

*& Address Data
CLEAR lw_address.
lw_address-lastname = <user lastname>.

REFRESH: lt_param, lt_return.

CALL FUNCTION 'BAPI_USER_CREATE1'
  EXPORTING
    username  = <userid>
    logondata = lw_logondata
    password  = lw_password
    address   = lw_address
  TABLES
    parameter = lt_param
    return    = lt_return.
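
Note that the user BAPIs do not commit on their own; an explicit commit must follow:

CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.   " wait for the update task to finish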

Assign Roles
BAPI_USER_ACTGROUPS_ASSIGN

DATA lt_roles TYPE STANDARD TABLE OF bapiagr.

CALL FUNCTION 'BAPI_USER_ACTGROUPS_ASSIGN'
  EXPORTING
    username       = <userid>
  TABLES
    activitygroups = lt_roles
    return         = lt_return.

Delete Users

CALL FUNCTION 'BAPI_USER_DELETE'
  EXPORTING
    username = <userid>
  TABLES
    return   = lt_return.

IDocs and tRFC

IDocs and tRFC


IDoc
Intermediate Document
An IDoc (intermediate document) is a standard data structure for electronic data interchange (EDI).
IDocs are used for the data interchange between SAP systems as well as between an SAP system
and an external system. IDocs serve as the vehicle for data transfer in SAP's Application Link
Enabling (ALE) system. The contents, structure, sender, receiver, and current status of the IDoc are
defined in the IDoc header.

Data extraction from an SAP R/3 source system to BW begins when BW sends a request in the form of a request IDoc.
The source system then extracts the data and sends it to the BW system.

REFER LINK

Transactional RFC (tRFC)
For each transaction running in the system, LUWs are generated. Erroneous or unprocessed tRFC LUWs can be seen in SM58.


Scenarios
Data not coming to BW - request is in yellow status
This can happen when tRFCs are stuck in the source system.

Take the IDoc number, go to Tcode WE02 and display the IDoc to see if there are any errors.

Also, go to SM58 and run the tcode. It will give you the list of tRFCs present in the system. They can be there either due to errors or just hanging because there are no processes free to process them. You can execute them from here, so that they reach the destination.

Maintaining Languages
Tcode : WE03
In the infopackage, we can see the incoming and outgoing IDoc numbers, and if we run WE03, we can see the logs of the request.
In language infopackages, even though we cannot see the SPRAS selection, if we check the IDoc, we can see the languages being used in the filter.


Language setting in BW

Master T-code for Language settings
I18N

TCodes : I18N ; SMLT ; RZ10 ; RZ11

Table : TCPOI


Report Program : RSCPINST

This is the 'NLS Setting Maintenance' program, where we can add new languages to the system.

Click on 'Add' and add the languages.
Sometimes the F4 help will not offer the language you intend to add. In that case, the language list needs to be extended using 'Extend Language List', which takes you to a new screen where the new language can be added.

SMLT Tcode

Once that step is done, you can see the new language in the F4 help and add it. Save and activate it, and the system will update all the database table entries with the new language setting. Then the parameter 'zcsa/installed_languages' needs to be maintained manually.

RZ10
This is where the profiles for the system are maintained. Each profile has a number of parameters.
For adding a language, the parameter is 'zcsa/installed_languages'. The system needs to be restarted once the change is done.




Corresponding SAP Notes

112065 - Using customer language 'Z1'
42305 - RSCPINST (I18N configuration tool)
2185213 - Configuration of logon languages and profile parameter zcsa/installed_languages
1345121 - Profile Parameters which should not be set in Unicode
529789 - BW extraction/extractor checker differences ( Point 10)
73606 - Supported Languages and Code Pages


Billing

Datasources
  • 2LIS_13_VDHDR
  • 2LIS_13_VDITM
  • 2LIS_13_VDKON
TCode
VF01 / VF02  / VF03

Base Tables
VBRK ( Billing Document: Header Data )
VBRP ( Billing Document: Item Data )
VBUK ( Document Header: Status )
VBUP ( Document Item: Status )
KOMV / KONV [ KOMV is only a structure; KONV is the actual conditions table ]

Bill Plan Header  : FPLA
Bill Plan Item      : FPLT


Setup tables
OLI9BW / RMCVNEUF
Can be run wide open without any selection. Has the billing document on the selection screen.

Related SAP Notes
SIS/BW: Statistics update for value items, quantities HERE

Creating Billing Document / Change to Bill Plan
Here you can create billing documents for a sales order ( I am doing this for bill-plan-related stuff ).
Go to VF04 - enter the sales document and billing date.
Check the order type check box at the end.
It will display an entry.
Save it.
This will create a billing document.


2LIS_13_VDITM

Billing Item
Base tables : SAP Link

FKIMG and ZFKIMG / OLIME / NETWR and ZZNETWR
My notes* ( not sure if below is the right explanation )
In VBRP, we can see FKIMG populated irrespective of POSAR, but in 2LIS_13_VDITM, FKIMG does not get populated for value items. SAP Note 368011 solves the issue.

Basically, for value items OLIME is the field which is populated. This field is not from any table, but from a structure which gets populated internally ( not sure where ).
So if you implement solution 2 of the note, values from OLIME or FKIMG are moved to ZFKIMG, which can be used in BW.

Check the code in the below function module / include to see the assignment:
EXIT_SAPLMCS6_002
ZXMCVU06
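
Roughly, the assignment inside the include looks like the following sketch ( the table name xmcvbrp and the header-line style are assumptions from memory - check the actual interface of EXIT_SAPLMCS6_002 in SE37 and the exact code in Note 368011 ):

* ZXMCVU06 - move the internally computed value-item quantity
* into the appended field so BW can use it
LOOP AT xmcvbrp.
  xmcvbrp-zfkimg = xmcvbrp-olime.
  MODIFY xmcvbrp.
ENDLOOP.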




2LIS_13_VDKON

Base tables : VBRK
                    VBRP
                    KONV ( in the LBWE structure it is KOMV, but KOMV is only a structure )

This datasource uses the item change date as the delta pointer. It will get all new/changed records at item level.

Only active conditions are considered: KONV-KINAK should be blank.
If Note 1062462 is implemented, inactive conditions can be extracted, but there are some other side effects, as mentioned in the note - not sure how they would affect things.
Personal experience : in the user exit for the extractor, I added code to append the inactive records to the standard extractor output.

BW Housekeeping


  • Delete entries from the PSA regularly
  • Delete entries from the change log regularly
  • Use the RSREQREDUCE tcode to monitor the growth of infoproviders