Windows app control

Introduction

Controlling a Windows application can be useful for process automation or for testing. This article provides more insight into possible ways to control applications in Windows using built-in methods, namely:

  • SendKeys mechanism – simulates sending keystrokes to the currently active application
  • Windows GUI
  • IE OLE

SendKeys

This method allows sending keystrokes to virtually any active Windows application. The downside is that the active application can change at any time, in which case the keystrokes will go to another application.
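
For illustration, here is a minimal sketch of this mechanism in VB.NET (assuming a reference to System.Windows.Forms): it starts Notepad, activates it and types a line of text.

    Imports System.Diagnostics
    Imports System.Windows.Forms   ' reference System.Windows.Forms for SendKeys

    Module SendKeysSketch
        Sub Main()
            ' Start Notepad and wait until it is ready to accept input.
            Dim proc As Process = Process.Start("notepad.exe")
            proc.WaitForInputIdle()

            ' Keystrokes always go to the currently active window, whichever it is,
            ' so the target is activated right before sending.
            AppActivate(proc.Id)
            SendKeys.SendWait("Hello from SendKeys{ENTER}")
        End Sub
    End Module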

Windows GUI

This method is specific to a given application and requires knowledge of the application's GUI components. The components can be addressed by:

  • a window handle, which is globally unique but dynamically assigned by the operating system;
  • a program (control) ID, which is assigned at programming/compilation time, should be unique and remains constant for an application.

Addressing by program ID is the recommended option. Use a tool like Spy++ (or write your own based on the example) to inspect an application's GUI components.
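
As an illustration, the sketch below addresses a GUI component both ways: the top-level window handle is looked up at run time by the window title, and a child control is addressed by its program (control) ID. The ID &H85 is only a placeholder – read the real value with Spy++.

    Module GuiAddressingSketch
        ' Win32 functions used to locate the window and one of its child controls.
        Private Declare Auto Function FindWindow Lib "user32.dll" (ByVal lpClassName As String, ByVal lpWindowName As String) As IntPtr
        Private Declare Function GetDlgItem Lib "user32.dll" (ByVal hDlg As IntPtr, ByVal nIDDlgItem As Integer) As IntPtr
        Private Declare Auto Function SendMessage Lib "user32.dll" (ByVal hWnd As IntPtr, ByVal msg As Integer, ByVal wParam As IntPtr, ByVal lParam As IntPtr) As IntPtr

        Private Const BM_CLICK As Integer = &HF5

        Sub Main()
            ' The window handle is dynamic - it is looked up at run time by the title.
            Dim hCalc As IntPtr = FindWindow(Nothing, "Calculator")
            If hCalc = IntPtr.Zero Then Return

            ' The control (program) ID is fixed in the target application;
            ' &H85 is a placeholder - check the real ID with Spy++.
            Dim hButton As IntPtr = GetDlgItem(hCalc, &H85)
            If hButton <> IntPtr.Zero Then
                SendMessage(hButton, BM_CLICK, IntPtr.Zero, IntPtr.Zero)
            End If
        End Sub
    End Module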

IE OLE

This method can be used to control HTML documents loaded by IE. It assumes knowledge of the GUI object “Internet Explorer_Server” and the OLE (Object Linking and Embedding) object ID “626FC520-A41E-11CF-A731-00A0C9082637”, which represents the internal HTML document.

Using OLE to control HTML pages is more complex due to the nature of HTML.
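
The sketch below shows one common way to obtain that document, assuming a project reference to mshtml (see the example section) and that the handle of the “Internet Explorer_Server” window has already been located (e.g. with FindWindowEx): the window is sent the registered WM_HTML_GETOBJECT message and ObjectFromLresult converts the answer into the HTML document interface.

    Imports System.Runtime.InteropServices
    Imports mshtml

    Module IeOleSketch
        Private Declare Auto Function RegisterWindowMessage Lib "user32.dll" (ByVal lpString As String) As Integer
        Private Declare Auto Function SendMessageTimeout Lib "user32.dll" (ByVal hWnd As IntPtr, ByVal msg As Integer, ByVal wParam As IntPtr, ByVal lParam As IntPtr, ByVal fuFlags As Integer, ByVal uTimeout As Integer, ByRef lpdwResult As IntPtr) As IntPtr
        Private Declare Function ObjectFromLresult Lib "oleacc.dll" (ByVal lResult As IntPtr, ByRef riid As Guid, ByVal wParam As IntPtr, <MarshalAs(UnmanagedType.IUnknown)> ByRef ppvObject As Object) As Integer

        ' hIeServer is the handle of an "Internet Explorer_Server" window.
        Function GetHtmlDocument(ByVal hIeServer As IntPtr) As IHTMLDocument2
            Dim msg As Integer = RegisterWindowMessage("WM_HTML_GETOBJECT")
            Dim lResult As IntPtr

            ' 2 = SMTO_ABORTIFHUNG, 1000 ms timeout.
            SendMessageTimeout(hIeServer, msg, IntPtr.Zero, IntPtr.Zero, 2, 1000, lResult)

            ' The OLE object ID quoted above identifies the internal HTML document.
            Dim iid As New Guid("626FC520-A41E-11CF-A731-00A0C9082637")
            Dim doc As Object = Nothing
            ObjectFromLresult(lResult, iid, IntPtr.Zero, doc)
            Return CType(doc, IHTMLDocument2)
        End Function
    End Module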

The example

It requires .NET and has been tested on Windows 7 and 8.1.

Included examples:

  1. RunKeys with notepad
  2. RunKeys with calculator
  3. GUI with calculator
  4. IE OLE (it assumes that only one IE instance with one tab is open)

After each example the program waits until the current window is closed.

Check this VB source code example: WinControlExample

Add mshtml to the project for compilation.


 


Organization of work

The configuration management process for a database is more complex due to the nature of a database and the complexity of delivering database upgrades, which makes the organization of work even more important. Considering the high cost of database merges, and especially of merge conflicts, it is important to pay close attention to the organization of work in order to make the configuration management process more efficient. Here are some recommendations:

  • Make teams of about 5-10 developers who work close to each other and can communicate verbally whenever a question or doubt arises. Good communication with prompt answers is of key importance for complex processes.
  • Divide all aspects of database development into a set of areas according to functionality and technical difficulty.
  • Make sure that there are about 3-4 areas per developer. Assuming 5-10 person teams, this divides a database into about 15-40 areas. Then assign 3-4 areas of specialization to every developer, and make the map of specializations available to the team.
  • Don’t change the assignments too frequently; keep them stable so the developers can learn and become experts in their assigned areas.
  • Make sure that every area has at least two assignees: one primary and another as backup. That opens more possibilities to assign tasks and helps resolve issues, e.g. when somebody is on vacation.
  • Assign work to developers according to their areas of expertise. When possible, avoid assigning tasks in areas a developer is not familiar with, unless it is intended as a learning exercise.
  • Avoid scheduling multiple changes in the same area/component (e.g. a package or table) at the same time. In other words, from a global perspective, changes made to the same area should follow a serial order.
  • On the other hand, changes that belong to different areas can and should be coded in parallel when possible, that is, when the changes are not technically conflicting.
  • Merge changes to other branches as soon as all changes in the given area are coded and verified. This is somewhat risky and would cause extra work should the changes have to be recalled, but fortunately that does not happen very frequently. It is more probable that late merges will expose a project to merge conflicts that will cost the team much more work and time.

The consequences

  • It is not recommended to make multiple changes to the same component simultaneously in parallel branches by different developers. The cost of the merges might be much higher than the cost of making the changes in serial order. And the cost of merges is not specific to a particular tool or repository (e.g. SVN or Git) but stems mostly from the nature of a database. So even if two developers could potentially work simultaneously on two different changes in the same area, it is better to make the changes serially to save merging costs.
  • Should there be many changes in the same area and some of them need to be delivered sooner than the rest, put the more urgent changes in a separate branch. Start the work in that branch first and schedule work on the remaining changes in another branch after the urgent changes are coded and verified. This way, the preferred changes will be delivered first.
  • The goal is to avoid merge conflicts so that merges become as easy as copying the changed components from one branch to another. Assuming that developers are experts in their dedicated areas, they can be trusted to perform the merges themselves. This saves a lot of the configuration manager’s time, which can be used for something else, e.g. reviewing the consistency of changes. Again, prompt communication is the key aspect here.

An example

The table on the left shows changes waiting to be assigned. Every change is associated with its area (indicated by its color), its cost expressed in units of time, and the project that the change belongs to.

The diagram on the right shows the ordering of the changes in time within each project. First the tasks are assigned to projects and then globally ordered according to areas of expertise. One unit of time is dedicated to the merge process (marked as m). The merges are triggered from Project I, as shown by the dotted lines. The symbol Ø represents idle windows of time, when no work is being performed in a given area.

The bottom line

Resolving merge conflicts is costly, especially within a database project. It is worth considering how tasks can be scheduled in order to avoid conflicts in a project. Without thinking ahead, the cost of merging database changes could rise so high that it degrades the overall efficiency of teams working on parallel projects, rendering the parallel strategy uneconomical. Experience teaches that partially parallel projects, with changes in the same area scheduled serially, are much more economical even with some “idle” time. And the idle time is not lost, as it can be used for other important tasks like training.

International Conference, Lima 2012

We would like to extend our sincere thanks to all speakers, participants and the people from SCMSupport and from Colegio de Ingenieros del Peru who prepared the International Conference in Lima that took place on May 19, 2012.

The picture shows the speakers and some of the people who worked to make it happen, from left to right: Elba Manrique (our friend and supporter), Mg. Robert Berlinski (IT Specialist), Dr. Jorge Yrivarren (current Chief of RENIEC), Mg. Marco Sotelo (IT Specialist, BCRP), PMP Mercedes Gavilán (Organizer), Dr. Germain Pinedo (Moderator).

Thank you!

Software Configuration Management before and today

In the old days of computer “dinosaurs”, punched cards served to handle code. It is not the type of medium that matters here, but the fact that the programmer delivered it personally, hand to hand. Later, in the time of floppy disks, the process hadn’t changed much. One programmer worked on one program and then delivered it on a floppy. It was simple to ask and answer the questions of who would deliver a new version of a program and when. And with only one computer available there wasn’t any separate environment for tests. The process was think-code-test-think again, repeated until the computer calculated it right. Of course the expectations of computer owners were different too. Nobody expected to calculate millions of customers’ transactions overnight or to serve a database for an Internet store with thousands of products and millions of clicks working 24/7.

But technology and business expectations have changed over time and we need to deal with that. Now the business requires a reliable production system that works 24/7 and serves multiple concurrent users. For that reason the business is ready to pay for a test team, hardware, licenses, etc. And the business requires complex, heterogeneous systems that can no longer be developed by one programmer. There are teams developing complex systems and then improving them.

The one thing that doesn’t change is the set of questions the business asks: what can you give us, when, and how much will it cost? And the business would like to hear that it will be available the same day and will cost little, preferably next to nothing.

These expectations place development managers in an uncomfortable situation. Some might think that the solution is to hire different developers who will do the same work and more for less. But that is walking on thin ice. Good quality work must cost, and there is no doubt about it. What usually can be improved are the procedures.

Since there is a long way from developers to production, there are many places for improvement. And that is where Configuration Management is especially significant today. Configuration Management is not only about controlling how changes are coded, tested, fixed and delivered to production. Good Configuration Management helps developers work in teams more efficiently and then smoothly deliver their work through all phases into production.

Configuration Management has several important functions today.

The first is to control who is making what change and when the change was committed to the code, then to track when the change got into tests, when it was tested, and when it was delivered to pilot and to production. All this helps manage the changes and helps the business plan the time for tests and confirm delivery. And should something go wrong, it gives a chance to speak with the people working on the change, learn lessons, and then improve to avoid problems in the future.

And there is much more to controlling changes today. To plan and know that everything is on schedule is not enough. Controlling means actually being able to change what will be delivered to test, and then to pilot and production. The world is changing, and it is not unusual for the business to ask to change the priority of changes, perhaps dropping one change to make room for another, more important one. Configuration Management should be flexible enough to allow that kind of change and efficient enough to do it at a cost acceptable to the business.

For flexibility at a reasonable cost we need adequate structures and procedures. A single workflow path is not sufficient to support a flexible Configuration Management system. It takes many parallel paths, which can be implemented as parallel branches in the revision control system. Multiple parallel branches make it possible to move changes between them when the business requires it. The move operation is not free; it takes time and it costs. But sometimes the business is willing to pay that cost.

Another very important aspect of modern Configuration Management is teamwork. Good Configuration Management not only makes it possible but makes it efficient: efficient in assigning tasks to developers and then coding, unit-testing and committing changes. But it also must be efficient in resolving conflicts when two developers have changed the same file. And it must allow more complicated operations like withdrawing changes or retrofitting changes to other branches.

There are many elements of the Configuration Management process to consider. And somebody could ask: what is the best way to make it work? Well, the recommended way is to apply the trunk-and-branches structure in the revision control system and then carefully follow the procedures…

CMToolBox RC1 Trial

SCMSupport.com published the CMToolBox plugin for PL/SQL Developer on April 24, 2012. It is a public and free RC1 Trial version available from http://www.scmsupport.com/cmtoolbox/index.html.

The CMToolBox plugin automatically saves objects from the database to a local SVN branch and mediates between the SVN server and the local SVN directories in order to provide SVN operations: update, commit, browse, diff, status, log and add.

Functionality:

  • Supports multiple branches/databases representing multiple versions and projects.
  • Provides an automated option to save database objects to local SVN directories.
  • Provides easy access to the commit, update, browse, diff, status, log and add functions via TortoiseSVN.
  • Handles: FUNCTION, PROCEDURE, TRIGGER, PACKAGE, PACKAGE BODY, TYPE, TYPE BODY and VIEW types.
  • Automatically concatenates PACKAGE with PACKAGE BODY and TYPE with TYPE BODY respectively when saving database objects.
  • Optionally reduces diacritic characters to ASCII equivalents for compatibility with UTF-8.
  • Straightforward installation, configuration and easy to use pop-up menu and toolbar icons.

Here is a demo from YouTube:

 

Database test models

It would be nice to have automated continuous tests for a database. Unfortunately, testing a database is not as simple as testing an application. Let’s consider three different models for testing a database.

Directly calling database procedures/functions and verifying responses.

This model is suitable for unit tests as it allows direct access to database objects. On the other hand, testing complex use cases might require a series of calls, which makes it more difficult.
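
A minimal sketch of this model in VB.NET, assuming a SQL Server database; the connection string, the stored procedure dbo.CalculateDiscount and the expected value are placeholders, not taken from this article:

    Imports System.Data
    Imports System.Data.SqlClient

    Module DirectDbTestSketch
        Sub Main()
            ' Call the procedure directly and compare the response with the expectation.
            Using conn As New SqlConnection("Server=localhost;Database=TestDb;Integrated Security=true")
                conn.Open()
                Using cmd As New SqlCommand("dbo.CalculateDiscount", conn)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.AddWithValue("@CustomerId", 42)

                    ' The procedure is assumed to return the computed value as a single scalar.
                    Dim result As Decimal = Convert.ToDecimal(cmd.ExecuteScalar())
                    Console.WriteLine(If(result = 0.1D, "PASS", "FAIL: got " & result.ToString()))
                End Using
            End Using
        End Sub
    End Module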

Calling a business application that is connected to the database and verifying responses.

This model is suitable for testing business functionality through the business applications. But it is not suitable for unit tests or for testing asynchronous processes, as the responses might not always be available from the application or might not include all expected details about how the database performed a request.

Calling a business application that is connected to the database and verifying database changes/events.

This is the most complex model. It uses the business application to trigger calls to the database, and it assumes that the database has an agent that detects specific database events. The events are stored in an event log, and based on that a database monitor can process the events and forward them to the test engine for matching. The model is suitable both for low-level unit tests of elementary database objects and for testing complex business functions, including asynchronous processes. On the other hand, the model requires additional resources to run the agent, the log and the database monitor. It also requires more configuration to define and detect database events, and additional configuration to match calls with events that might arrive asynchronously. One technique that helps simplify the matching process is adding a distinguishing identifier to each test, if the application allows it, and then storing the identifier with each change within the database; a practical example of such an identifier could be a USER id. Running a test in this model requires configuring a series of tests, and each test must include four configuration elements (a sketch of such a definition follows the list):

  • Definition of the call to the application.
  • List of events that the Database Event Agent should watch for and then store in the Event Log (both positive and negative).
  • List of events that the Database Monitor should forward to the Test Engine.
  • Matching rules for the Test Engine that define success and failure results.
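
Purely as an illustration, one test definition could be represented by a simple structure holding these four elements; all names below are hypothetical and not part of the model described above.

    Imports System.Collections.Generic

    ' Hypothetical shape of one test definition; names are illustrative only.
    Public Class DatabaseTestDefinition
        ' 1. Definition of the call to the application (e.g. a command plus its arguments).
        Public Property ApplicationCall As String

        ' 2. Events the Database Event Agent should watch for and store in the Event Log
        '    (both positive and negative).
        Public Property WatchedEvents As New List(Of String)

        ' 3. Events the Database Monitor should forward to the Test Engine.
        Public Property ForwardedEvents As New List(Of String)

        ' 4. Matching rules the Test Engine uses to decide success or failure,
        '    e.g. keyed by event name with an expected value.
        Public Property MatchingRules As New Dictionary(Of String, String)
    End Class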

The model might be suitable for continuous integration of complex database systems.