Sunday, November 7, 2010

Get ready for virtual wireless LANs

One of the areas of IT seeing a lot of investment is virtualization. In most cases, the term virtualization refers to server virtualization. At the Bluesocket seminar, though, Jim was not discussing server virtualization, but wireless LAN virtualization. At first blush, the thought of a virtual wireless LAN seems a bit strange. One obvious question is "how do you virtualize an access point?" The quick answer is that you don't.

Part of the motivation for a virtual wireless LAN is the realization that wireless LANs are becoming an increasingly integral component of the network infrastructure. As such, wireless LANs need to be able to scale, both in the amount of traffic they can support and in the services they can enable. That is difficult, if not impossible, with a centralized architecture that requires all traffic to go to a centralized controller. The Bluesocket approach is to separate the data plane and the control plane, in a fashion somewhat similar to the approach taken by the Cisco Nexus 1000V.

The data plane is distributed to the access points, while the control plane runs on a centralized computer. For this approach to be successful, you have to minimize the amount of traffic that flows over the LAN or the WAN to the centralized controller. The really interesting thing Bluesocket does is virtualize the controller software, and hence sell a virtual wireless LAN. Virtualizing the controller has a number of benefits, including reducing the acquisition cost and making it easier to add capacity as needed.

Serious Linux Kernel security hole

Linux has security problems like any other operating system. Most of them aren't that big a deal, though. Many of the more serious ones require local user access to cause any real trouble, and except for Linux desktop users that's not a real concern. The latest Linux security problem, in Reliable Datagram Sockets (RDS), is a real problem, though.

RDS is an Oracle creation. It's used for sending multiple messages from a single network socket to multiple end-points. The point of RDS is that you can use it to keep inter-process communication (IPC) going without timeouts when a system is running under very heavy loads. Thus, you're most likely to be using RDS if you're running a mission-critical DBMS server or a Linux, Apache, MySQL, PHP/Python/Perl (LAMP) stack application.

VSR Security, the company that found the security hole, reports that the Linux kernel, starting with 2.6.30, which was the first to include RDS, can be attacked by almost any user in a way that lets them become the super-user, aka root. In short, someone coming in over an Internet connection could, in theory, take over a Linux server. This is not good.

The core problem was that the "kernel functions responsible for copying data between kernel and user space failed to verify that a user-provided address actually resided in the user segment, [thus] a local attacker could issue specially crafted socket function calls to write arbitrary values into kernel memory. By leveraging this capability, it is possible for unprivileged users to escalate privileges to root."

I don't know if it will do that, but I was able to use the exploit code to knock out a SUSE Linux server in my lab remotely. Let me repeat myself: Not good. Others have reported that they've been able to use the exploit code to open up a root shell on Ubuntu 10.04.

For the problem to hit your system, you have to have RDS on. Specifically, you have to have the CONFIG_RDS kernel configuration option set. That's usually an option in most distributions rather than a default. Of course, if you really need RDS, you're probably running it on a mission-critical DBMS or Web server, which is the last place you want an attack to land. The other necessary condition is that there be no restrictions on unprivileged users loading packet family modules. That, I regret to say, is the default on many distributions.
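If you're not sure whether RDS is in play on your server, a quick check along these lines will tell you (the kernel config file's location varies by distribution):

lsmod | grep rds
grep CONFIG_RDS /boot/config-$(uname -r)

The first command shows whether the rds module is currently loaded; the second shows whether the running kernel was built with RDS support.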

Fortunately, unlike some other operating systems I could name, security holes tend to get fixed really quickly in Linux. Linus Torvalds has already issued a fix. You can either compile a new kernel with the fix, which few people do these days, or wait for your upstream Linux distributor to issue it.

In the meantime, I recommend that if you're running a Linux server and you're using RDS, you log in as root and run the following command:

echo "alias net-pf-21 off" > /etc/modprobe.d/disable-rds

(Note that on some distributions modprobe only reads files in /etc/modprobe.d that end in .conf, so you may need to name the file disable-rds.conf.) Your server may run slower until the final fix is in, but in the meantime you'll be safe, and that's the most important thing.

Three ways of solving data migration problems

Clustered storage represents one of the more significant trends in storage right now, and for good reason. Clustered storage gives users a way to avoid one of the more unpleasant realities that storage networks create: storage islands.

In most networked storage configurations, storage islands result when companies purchase multiple storage arrays. Though these arrays may communicate and even replicate data between one another, the real problem surfaces when one needs to retire a storage array. At that point, users must almost always schedule an application outage for the servers attached to that array, since the storage array provides no native, nondisruptive way to migrate data to a new storage array.

To date, users have circumvented this in a couple of ways. One was to use volume managers on servers with the capability to discover logical unit numbers or disks on a new storage array and then mirror the data in real time from the old array to the new one. The other was to use network-based virtualization software that resides between the server and the storage array and can perform similar nondisruptive data migration functions.

Clustered storage arrays now give companies a third option for performing this task nondisruptively. With clustered storage, all storage arrays in a cluster communicate with one another, which allows users to copy or move data between them in real time without requiring server or application outages.

Clustered storage arrays such as those from Isilon Systems Inc. still require companies to standardize on their products to deliver this functionality, and they do not address concerns about how to manage storage arrays companies may already own. But for companies that want to simplify storage management and standardize on a specific vendor's product, clustered storage arrays provide a viable alternative that avoids a key shortcoming of storage networks.

Data Migration in an SAP Implementation Project

Data migration is one of the key and most complex areas in an SAP implementation project. Some implementations fail simply for lack of a proper data migration strategy. If we are implementing SAP by replacing a source (e.g., legacy) system, the objective of data migration is to load the relevant business data currently residing in the source system into the appropriate SAP modules. The data loaded into the SAP system then needs to be validated against the source data.

So the Data Migration process includes:

  • Extraction of data from the source system
  • Transformation of the data to SAP format
  • Loading of data into the corresponding SAP module

Steps to be followed:

We first need to identify which objects we are going to migrate (Contracts, Assets, Business Partners, etc.) and finalize the solution architecture: whether we will use an intermediate staging environment, whether to use an ETL tool for it, and the strategy and technique for loading each object (BAPI, IDoc, Batch Input, or Direct Input).

The steps to be followed are:

  • Identification of Source data: Source Data Dictionary
  • Identification of Target data requirement: Target Data Dictionary
  • Finalizing the technique of data loading
  • Mapping of source to target: Mapping document with gaps
  • Resolution of data gaps: By applying business rules
  • Preparation of test plan and data: To be used for unit testing
  • Finalizing the validation and reconciliation strategy: before uploading, the data (record count, sum of specific fields) should be checked against a control file (reconciliation), and after uploading, the data loaded into the SAP tables should be validated against the source file (see the sketch after this list)
  • Development: Data conversion program
  • Developing the validation and reconciliation report
  • Integration Testing: Check for the completeness of the entire data flow
  • Volume testing: To check the performance
  • Transport into Production
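
To illustrate the reconciliation idea, here is a minimal standalone VBScript sketch. The file names and layout are hypothetical: it assumes a comma-separated extract with no header row, the amount in the third column, and a control file whose single line holds "expectedCount,expectedTotal".

code:
' Hypothetical reconciliation check: compare the extract file's record count
' and amount total against the values recorded in a control file
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.OpenTextFile("C:\Migration\extract.csv", 1) '1 = ForReading
count = 0
total = 0
Do While Not f.AtEndOfStream
    fields = Split(f.ReadLine, ",")
    count = count + 1
    total = total + CDbl(fields(2)) 'amount assumed to be in the 3rd column
Loop
f.Close
Set c = fso.OpenTextFile("C:\Migration\control.txt", 1)
ctrl = Split(c.ReadLine, ",")
c.Close
If count = CLng(ctrl(0)) And total = CDbl(ctrl(1)) Then
    MsgBox "Reconciliation OK: " & count & " records"
Else
    MsgBox "Reconciliation MISMATCH: count=" & count & ", total=" & total
End If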



Get SAP add-in for QTP 8.2

You can download the QTP SAP 8.2 add-in from the link below.

Please specify the search keyword as SAP and choose Trial under Refine Search.

Hope this was useful!

https://h10078.www1.hp.com/cda/hpdc/display/main/index.jsp?zn=bto&cp=54_4012_100__




Alternatively, QTP 10.0 comes with all add-ins built in: re-run the setup, click on "Modify", and then select the SAP add-in.

PS: You need to have admin rights to perform this task.

QTP Tips & Tricks

Data Table

There are two types of data sheets:
Global data sheet: accessible to all the actions
Local data sheet: accessible to the associated action only

Usage:

DataTable("Column Name",dtGlobalSheet) for Global data sheet
DataTable("Column Name",dtLocalSheet) for Local data sheet

If we change anything in the Data Table at run-time, the change is made only in the run-time data table. The run-time data table is accessible only through the test results, but it can also be exported using DataTable.Export or DataTable.ExportSheet.
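
For example, a minimal sketch (the file paths are hypothetical):

code:
' Export the full run-time data table to a file
DataTable.Export "C:\Results\RunTimeData.xls"
' Or export a single sheet by name
DataTable.ExportSheet "C:\Results\GlobalSheet.xls", "Global"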

How can I save the changes to my DataTable in the test itself?

Well, QTP does not provide anything for saving run-time changes back to the actual data sheet. The only workaround is to share the spreadsheet and then access it using the Excel COM APIs.
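
A rough sketch of that workaround (the path, sheet, cell, and parameter name are all hypothetical):

code:
' Write a run-time value back to the shared spreadsheet via Excel's COM API
Set xlApp = CreateObject("Excel.Application")
Set xlWorkBook = xlApp.Workbooks.Open("C:\TestData\Data.xls")
Set xlWorkSheet = xlWorkBook.Worksheets("Global")
xlWorkSheet.Cells(2, 1).Value = DataTable("ParamName", dtGlobalSheet)
xlWorkBook.Save
xlWorkBook.Close
xlApp.Quit
Set xlWorkSheet = Nothing
Set xlWorkBook = Nothing
Set xlApp = Nothing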


How can I check whether a parameter exists in the DataTable?

The best way is to use code like the following:
code:
On Error Resume Next
val = DataTable("ParamName", dtGlobalSheet)
If Err.Number <> 0 Then
    'Parameter does not exist
    Err.Clear
Else
    'Parameter exists
End If
On Error GoTo 0


How can I make some rows colored in the data table?

You can't do it through QTP itself, but you can use the Excel COM APIs to do the same. The code below shows some aspects of the Excel COM API:
code:
Set xlApp = CreateObject("Excel.Application")
Set xlWorkBook = xlApp.Workbooks.Add
Set xlWorkSheet = xlWorkBook.Worksheets.Add
xlWorkSheet.Range("A1:B10").Interior.ColorIndex = 34 'Change the color of the cells
xlWorkSheet.Range("A1:A10").Value = "text" 'Set all 10 rows in column A to "text"
xlWorkSheet.Cells(1, 1).Value = "Text" 'Set the value of the first row, first column

rowsCount = xlWorkSheet.Evaluate("COUNTA(A:A)") 'Count the non-blank rows in column A
colsCount = xlWorkSheet.Evaluate("COUNTA(1:1)") 'Count the non-blank columns in row 1

xlWorkBook.SaveAs "C:\Test.xls"
xlWorkBook.Close
xlApp.Quit
Set xlWorkSheet = Nothing
Set xlWorkBook = Nothing
Set xlApp = Nothing


SMART Identification

Smart Identification is an algorithm QTP falls back on when it is not able to recognize an object from its recorded properties. A very generic example, as per the QTP manual: take a photograph of an 8-year-old girl and boy, and let QTP record the girl's identification properties at that age. When both are 10 years old, QTP would no longer recognize the girl from those properties. But something is still the same: there is only one girl in the photograph. So it is a kind of PI (programmed intelligence), not AI.

When should I use SMART Identification?

This is something people don't think about too much, but you should disable SI while creating your test cases, so that you can spot the objects that are dynamic or inconsistent in their properties. Once the script has been created, SI should be enabled so that the script does not fail in case of small changes. The developer of the script should still always check the test results to verify whether the SI feature was used to identify an object. Sometimes SI needs to be disabled for particular objects in the OR; this is advisable when you use SetTOProperty to change any of the TO properties of an object, especially ordinal identifiers like index, location, and creationtime.


Descriptive Programming

Descriptive programming is a technique that lets you perform operations on AUT objects that are not present in the OR. For more details refer to http://bondofus.tripod.com/QTP/DP_in_QTP.doc (right-click and use Save As...).
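
A minimal sketch of the idea (the page and property values are hypothetical):

code:
' Click a button by describing its properties inline, with no OR entry
Browser("title:=Login").Page("title:=Login").WebButton("name:=Sign In").Click

' The same thing using a Description object
Set btnDesc = Description.Create()
btnDesc("name").Value = "Sign In"
Browser("title:=Login").Page("title:=Login").WebButton(btnDesc).Click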


Recovery Scenarios

What is a Recovery Scenario?

A recovery scenario gives you the option to take some action to recover from a fatal error in the test. The errors can range from occasional to typical. An occasional error would be an "Out of paper" popup while printing something; typical errors would be "object is disabled" or "object not found". A test case can have more than one scenario associated with it, along with the priority or order in which the scenarios should be checked.


What does a Recovery Scenario consist of?

Trigger: the cause that initiates the recovery scenario. It could be a popup window, a test error, a particular state of an object, or an application error.

Action: what needs to be done if the scenario is triggered. It can consist of a mouse/keyboard event, closing the application, calling a recovery function defined in a library file, or restarting Windows. You can have a series of any of the specified actions.

Post-recovery operation: basically defines what needs to be done after the recovery action has been taken. It could be to repeat the step, move to the next step, and so on.


When to use a Recovery Scenario and when to use On Error Resume Next?

Recovery scenarios are used when you cannot predict at which step an error may occur, or when you know the error won't occur in your QTP script but could occur in the world outside QTP; again, the example would be "out of paper", since that error is raised by the printer device driver. "On Error Resume Next" should be used when you know an error is expected and don't want to raise it; you may want to take different actions depending on which error occurred. Use Err.Number and Err.Description to get more details about the error.
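
A minimal sketch of the On Error Resume Next pattern (the button and step are hypothetical):

code:
On Error Resume Next
Browser("title:=App").Page("title:=App").WebButton("name:=Print").Click
If Err.Number <> 0 Then
    Reporter.ReportEvent micWarning, "Print step", "Failed: " & Err.Description
    Err.Clear
End If
On Error GoTo 0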



Library Files or VBScript Files
How do we associate a library file with a test?

Library files are files containing normal VBScript code. A file can contain functions, sub procedures, classes, and so on. You can also use the ExecuteFile function to include a file at run-time. To associate a library file with your script, go to Test->Settings... and add your library file on the Resources tab.


When to associate a library file with a test and when to use ExecuteFile?

When we associate a library file with the test, all the functions within that library are available to all the actions present in the test. But when we use the ExecuteFile function to load a library file, the functions are available only in the action that called ExecuteFile. Associating a library with a test lets us share variables across actions (global variables, basically); association also makes it possible to execute code as soon as the script runs, because while loading the script on startup QTP executes all the code in the global scope. We can use ExecuteFile in a library file associated with the test to load files dynamically, and they will then be available to all the actions in the test.
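
A minimal sketch (the library path and function name are hypothetical):

code:
' Load a helper library at run-time; its functions become available in this action
ExecuteFile "C:\QTP\Libs\CommonUtils.vbs"
LoginToApp "admin" 'hypothetical function defined in CommonUtils.vbs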





Test and Run-Time Objects
What is the difference between Test Objects and Run-Time Objects?

Test objects are the basic, generic objects that QTP recognizes. A run-time object is the actual object in the application to which a test object maps.

Can I change the properties of a test object?

Yes. You can use SetTOProperty to change the test object properties. It is recommended that you switch off Smart Identification for any object on which you use the SetTOProperty function.

Can I change the properties of a run-time object?

No (but also yes). You can use GetROProperty("outerText") to get the outerText of an object, but there is no SetROProperty function to change it. However, you can use WebElement().Object.outerText="Something" to change the property through the native object.
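
A minimal sketch of the difference (the object names are hypothetical):

code:
' Change the Test Object description held by QTP
Browser("B").Page("P").WebEdit("UserName").SetTOProperty "name", "user_name"

' Read a property from the actual Run-Time Object
txt = Browser("B").Page("P").WebElement("Banner").GetROProperty("outerText")

' Change the run-time object through its native DOM object
Browser("B").Page("P").WebElement("Banner").Object.outerText = "Something"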


Actions & Functions
What is the difference between an Action and a function?

An Action is specific to QTP, while functions are a generic feature of VBScript. An Action can have an object repository associated with it, while a function can't. A function is just lines of code with some (or no) parameters and a single return value, while an action can have more than one output parameter.

Where to use a function or an action?

The answer depends on the scenario. If you want to use the OR feature, then you have to go with an Action. If the functionality is not about an automation script, e.g. getting the string between two specific characters, that is nothing specific to QTP and can be done in pure VBScript, so it should be a function rather than an action (see the sketch below). Code specific to QTP can also be put into a function using DP. Beyond that, the decision between function and action depends on what one is comfortable using in a given situation.
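
For instance, that string example as a plain VBScript function (a hypothetical helper, not part of QTP):

code:
' Return the text between the first occurrence of chStart and the next chEnd
Function GetBetween(str, chStart, chEnd)
    Dim p1, p2
    GetBetween = ""
    p1 = InStr(str, chStart)
    If p1 > 0 Then
        p2 = InStr(p1 + 1, str, chEnd)
        If p2 > 0 Then GetBetween = Mid(str, p1 + 1, p2 - p1 - 1)
    End If
End Function

MsgBox GetBetween("user=admin;", "=", ";") 'displays "admin"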



Checkpoints & Output Values
What is a checkpoint?

A checkpoint is basically a point in the test that validates specific things in the AUT. There are different types of checkpoints depending on the type of data that needs to be tested: text, image/bitmap, attributes, XML, and so on.

What's the difference between a checkpoint and an output value?

A checkpoint only checks a specific attribute of an object in the AUT, while an output value can write that attribute's value to a column in the data table.

How can I check whether a checkpoint passed?
code:
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
If chk_PassFail Then
    MsgBox "Checkpoint passed"
Else
    MsgBox "Checkpoint failed"
End If

My test fails due to a checkpoint failing. Can I validate a checkpoint without my test failing because of the checkpoint failure?
code:
Reporter.Filter = rfDisableAll 'Disable all reporting
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
Reporter.Filter = rfEnableAll 'Re-enable all reporting
If chk_PassFail Then
    MsgBox "Checkpoint passed"
Else
    MsgBox "Checkpoint failed"
End If

Environment
How can I import environment variables from a file on disk?

Environment.LoadFromFile "C:\Env.xml"

How can I check whether an environment variable exists?

When we use Environment("Param1").Value, QTP expects the environment variable to be defined already. But when we use Environment.Value("Param1"), QTP will create a new internal environment variable if it does not already exist. So to be sure that the variable exists in the environment, use Environment("Param1").Value.
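
A minimal sketch of that check (the variable name is hypothetical):

code:
On Error Resume Next
v = Environment("Param1").Value 'raises an error if Param1 was never defined
If Err.Number <> 0 Then
    MsgBox "Environment variable Param1 does not exist"
    Err.Clear
End If
On Error GoTo 0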


How to connect to a database?

code:
Const adOpenStatic = 3
Const adLockOptimistic = 3
Const adUseClient = 3
Set objConnection = CreateObject("ADODB.Connection")
Set objRecordset = CreateObject("ADODB.Recordset")
objConnection.Open "DRIVER={Microsoft ODBC for Oracle};UID=;PWD="
objRecordset.CursorLocation = adUseClient
objRecordset.CursorType = adOpenStatic
objRecordset.LockType = adLockOptimistic
objRecordset.Source = "select field1,field2 from testTable"
objRecordset.ActiveConnection = objConnection
objRecordset.Open 'This will execute your query
If objRecordset.RecordCount > 0 Then
    Field1 = objRecordset("field1").Value
    Field2 = objRecordset("field2").Value
End If
