Wednesday, February 27, 2013

Possible Outcomes for Component Replacement

Discuss why savings in cost from reusing existing software is not simply proportional to the size of the components that are used. What other factors affect the cost?

Principles of Component Independence and Possible Outcomes for Component Replacement

There is a general agreement in the software engineering industry that a component is an independent software unit which can be composed with other independent units in order to create a software system (Sommerville, 1989). Another commonly accepted definition is that a component can be independently deployed and composed without modification according to a composition standard (Councill and Heineman, 2001).

In any system, software or hardware, establishing component independence is an important first step in determining reliability.
This is typically achieved through independent component analysis (ICA), a computational technique for revealing hidden factors that underlie sets of measurements or signals (Oja, 2001). The two most commonly used criteria for interpreting component independence are minimization of mutual information and maximization of non-Gaussianity.

ICA is important because independence of components is a fundamental requirement for calculating system reliability (Woit, 1998) and can, to some extent, predict and prevent the possibility of system failure. Component-based systems need to evolve over time in order to prevent system failure and to add new functionality.
This evolution is typically controlled through the use of components, which are the units of change. When one component becomes redundant, it is replaced with another that also adheres to a standard of independence but is implemented in a different way.
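
The role of independence in reliability calculation can be made concrete: if components fail independently, the reliability of a simple serial system is the product of the individual component reliabilities. A minimal sketch (the component names and figures are hypothetical):

```python
from functools import reduce

def series_reliability(reliabilities):
    """Reliability of a serial system of independent components:
    the product of the individual component reliabilities."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

# Hypothetical per-component reliabilities over a fixed mission time
components = {"parser": 0.99, "scheduler": 0.97, "logger": 0.995}
print(round(series_reliability(components.values()), 4))  # 0.9555
```

If the components are not truly independent (for example, because they share a resource), the product formula no longer holds, which is why establishing independence comes first.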

The concept of component replacement relies on replacing a component that no longer functions properly, or that no longer serves the purpose for which it was initially implemented, with another component that can fix the error introduced by the original component or add a new function required by the system’s natural process of evolution.

However, although component replacement is practiced as a means to avoid system failure, this practice can have the opposite effect.
As stated before, reliability of the system is strongly connected to the independence of the components that form that specific system.

Replacing an old component with a new one requires a thorough and complete analysis of the new component, both isolated and in combination with the other components forming the system.

The reliability of a system depends on how its architecture and the component interfaces can coexist in equilibrium. The introduction of a new component, with its own specific interface, might facilitate some kinds of system architecture while precluding others (Nejmeh, 1989).
In order to foresee these events, the interface of a new component has to be discovered and analyzed prior to its introduction in the system (Brown, 1996).
However, current engineering practices and techniques do not allow for a complete analysis of a component’s interface, meaning that even though a certain component may seem like a perfect match when considered alone, it can lead to system failure once implemented.
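
One partial, automatable check that is possible before replacement is comparing the public interface of the candidate against the component it replaces. The sketch below (the `OldLogger`/`NewLogger` classes are hypothetical) illustrates the idea in Python, while underscoring the point above: a matching interface alone does not guarantee the replacement behaves correctly inside the system.

```python
import inspect

def interface_mismatches(old_component, new_component):
    """Return the public methods of old_component that the candidate
    replacement is missing or exposes with a different signature."""
    problems = []
    for name, member in inspect.getmembers(old_component, callable):
        if name.startswith("_"):
            continue  # skip private/dunder members
        replacement = getattr(new_component, name, None)
        if replacement is None or not callable(replacement):
            problems.append(f"missing: {name}")
        elif inspect.signature(member) != inspect.signature(replacement):
            problems.append(f"signature changed: {name}")
    return problems

class OldLogger:
    def log(self, message, level="info"): ...

class NewLogger:
    def log(self, message): ...  # dropped the 'level' parameter

print(interface_mismatches(OldLogger(), NewLogger()))  # ['signature changed: log']
```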

The principle of component independence, which serves as a foundation for any component-based system, implies that one independent component can be replaced with another independent component that is implemented in a different way but still ensures the coherency of the system.
However, although the independence of components is a means to ensure system reliability, such a replacement can ultimately lead to system failure, mainly because current engineering practices do not allow for an accurate analysis and evaluation of how a component that has not yet been implemented will behave once integrated into the system.

  • Brown, A.W. (1996). Engineering of component-based systems. Engineering of Complex Computer Systems, Second IEEE International Conference, p414-422
  • Councill, W.T. and Heineman, G.T. (2001). “Component-Based Software Engineering as a Unique Engineering Discipline”, Chapter 37 in G.T. Heineman and W.T. Councill, Editors, Component-Based Software Engineering: Putting the Pieces Together, Addison-Wesley, Boston, MA, pp. 675-964
  • Nejmeh, B. (1989). Characteristics of Integrable Tools. Technical Report, Software Productivity Consortium
  • Oja, E. (2001). Independent Component Analysis. Helsinki University of Technology
  • Sommerville, I. (1989). Software Engineering. 3rd ed. Edinburgh: Pearson Education Limited. p405-430
  • Woit, D.M. (1998). Software component independence. High-Assurance Systems Engineering Symposium, 1998. Proceedings, Third IEEE International, 3(1), p74-81

Elad Shalom,
CTO at

Phishing for Romance

Phishing is a type of online fraud that tries to trick you into revealing personal financial information, passwords, credit card numbers, etc. In most cases, phishing takes the form of an e-mail message claiming to come from a bank, credit card company, online retailer or some other legitimate source. Take the SonicWALL Phishing and Spam IQ Quiz (available at

Phishing for Romance

Phishing is a form of social engineering in which an attacker, also known as a phisher, attempts to fraudulently acquire genuine users’ private or sensitive credentials by mimicking electronic communications from a trustworthy or public organization in an automated fashion (Jakobsson and Myers, 2007). Phishing techniques circumvent an organization’s or individual’s security measures, nullifying firewalls, authentication software, and encryption, because most phishers nowadays use social engineering to entice potential targets.
Attackers can use different phishing methods, varying from simple phone phishing to website forgery.

The most commonly used method is via e-mail. Attackers can send large amounts of e-mail through botnets or zombie networks, delivering a number of fraudulent messages containing links that direct recipients to a phishing website.
Modified versions of this method have been seen throughout the years, and the profile of likely targets also changes. One version is the so-called romance scam. Victims receive an e-mail from an individual stating that he or she saw their profile on a social network and proclaiming his or her “love”.

Men and women in their mid-40s to 70s whose status is separated or widowed are the most likely targets of this scam. Once contact is made, the primary goal of the “lover” is to build rapport with the victim. He or she will tell the victim a story, such as being an engineer for a company, with a young daughter, and currently based in London or California.

The “lover” will send countless love poems or letters, likely copy-pasted, to the victim as proof of his or her eternal love. Once the victim is groomed, the “lover” announces that he or she wants to marry the victim and promises to send money to buy their dream house.

The victim is swept up in an ecstatic feeling of joy that overrides common sense.
Now that the victim is “hooked”, the scam artist creates a story about how the victim can receive the promised money.
The victim is told that an e-mail will arrive from a bank containing a transaction slip that needs to be processed and signed.

The transaction slip is almost a true copy of a real one, but with some modifications. Some versions contain a section where the victim must indicate the CVN/PIN of his or her credit card or bank account, along with a signature. Once the scam artist receives it, the account or credit card is used fraudulently to purchase items.

This is a good example of how emotions can contribute to the success of fraudulent activities. Being human, we are bound to commit errors, but that is no reason not to be vigilant. To prevent such scams, users should exercise better judgment and not fall for false pretenses.

The technically savvy should not dismiss the fact that technology is also a factor. Lack of information, or outdated information, greatly contributes to this issue. Developers must go beyond blaming users if they expect to deploy effective countermeasures against phishing attacks (Hong, 2012).

Tell-Tale Signs of a Romance Scam
  • Indication that your profile was seen on a social website
  • Attackers proclaim their “love” the minute you answer their e-mails
  • The use of an appealing introduction, such as an engineer for a petroleum company, a widowed architect, or a businessman traveling from country to country, followed by the heartwarming claim that his or her spouse died in an accident, leaving a young daughter
  • Asking about personal information regarding bank accounts, credit cards and other monetary information
  • Asking for monetary assistance for certain circumstances like being held in the airport by customs officials, certain tax needed to be paid for a luxury item
  • Promising ridiculous amounts of money to the victim
  • When chatting with the scammer, his accent is clearly not that of his claimed birthplace

  • Jakobsson, M. and Myers, S. (2007). Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft. John Wiley & Sons, Inc.
  • Hong, J. (2012). The State of Phishing Attacks. Communications of the ACM, 55(1), p74-81


Reuse in Component Based Software Engineering

Discuss why savings in cost from reusing existing software is not simply proportional to the size of the components that are used. What other factors affect the cost?

Reuse in Component Based Software Engineering

Software reuse is the process through which an organization designates a set of operating procedures to specify, produce, classify, retrieve, and adapt software components with the purpose of using them in development activities (Parnas, 1994). One of the main reasons organizations have adopted component based software engineering (CBSE), a system which has highly reusable qualities, in their software development process is the reduction in development costs and increase in productivity.
Software reuse means reusing an asset, or a component, in a different system than the one in which it was initially used (Frakes and Fox, 1995). The term might, at first glance, be somewhat misleading: reuse is by no means something that can be achieved free of cost (Lim, 1994). Software reuse is a long-term investment which can, apart from reducing cost, also increase the productivity, quality, and reliability (Haddad, Ross, and Kaensaksiri, 2010) of component-based software.

Software reuse requires an array of resources needed to set up a reuse library, reuse tools, and reusable products, which will represent the foundation for future reuse projects. In some cases, software reuse may be an investment that is not worth the benefits it offers to a certain organization; there are situations when building new software with no reused assets is significantly less costly than reusing assets. In order to decide if software reuse is a feasible approach and to determine the exact cost of such an operation, each organization should undergo an accurate cost analysis.

There are numerous factors which affect the cost of software reuse. Initially, an organization has to properly describe the software or product which is being developed in order to identify its requirements. This will allow the developer to search for already existing assets which could benefit the new software. In this stage of the process, costs will relate to trials, verifications, and acquisition of assets or components.

Subsequently, the developer will be required to invest in modifying the acquired assets so that they suit the new software system. Some components may require significant modifications, which could result in higher costs and more effort than creating a component from scratch (Boehm, Abts, and Chulani, 2000).
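
The relationship between modification effort and cost can be sketched with a simple adaptation model in the spirit of the COCOMO family of estimates (Boehm, Abts, and Chulani, 2000). The weights and figures below are illustrative assumptions, not calibrated values:

```python
def adaptation_adjustment_factor(dm, cm, im):
    """Combine the percentage of design modified (dm), code modified (cm),
    and relative integration effort (im) into a single adaptation factor."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

def equivalent_new_size(adapted_sloc, dm, cm, im):
    """Express reused code as 'equivalent new' lines of code, a proxy
    for the effort of adapting it instead of writing it from scratch."""
    return adapted_sloc * adaptation_adjustment_factor(dm, cm, im) / 100.0

# Hypothetical component: 10,000 SLOC reused with moderate modification
print(equivalent_new_size(10_000, dm=20, cm=30, im=40))  # 2900.0
```

When heavy modification pushes the factor toward 100%, adapting the component costs about as much as new development, which is exactly the situation described above.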

New software cannot be built entirely from already-existing assets, so any software reuse process will require investment in the development of new components as well. There will also be costs relating to the integration and testing of old and new assets together. Moreover, before the new software is ready for launch, additional money will have to be invested in infrastructure.

The exact cost of software reuse cannot be pinpointed precisely in most cases, but, usually, an accurate analysis will offer a rough estimate which will enable the developer to decide whether it is more advantageous to reuse components or to design entirely new software.

In conclusion, there are numerous factors that affect the cost of software reuse programs and each of them has to be taken into account when deciding whether reuse is the best approach an organization could use in the development of new software. Software reuse is much more complex than simply taking old assets and coming up with a new product and, consequently, its costs do not depend solely on the size of the reused components.

  • Boehm, B., Abts, C., and Chulani, S. (2000). Software development cost estimation approaches - A survey. Annals of Software Engineering, 10(1-4). Springer, Netherlands, November 2000
  • Frakes, W.B. and Fox, C.J. (1995). Sixteen Questions about Software Reuse. CACM, 38(6), p75-87
  • Haddad, H., Ross, N., and Kaensaksiri, W. (2010). Software Reuse Cost Factors. Department of Computer Science, Kennesaw State University, GA, USA
  • Lim, W.C. (1994). Effects of reuse on quality, productivity, and economics. IEEE Software, 11(5), p23-30
  • Parnas, D.L. (1994). Software Reuse and Component Based Software Engineering. 16th International Conference on Software Engineering


Cyberspace Censorship or Lawlessness

For this discussion, we will talk about “freedom of speech in cyberspace”. Please let us know any of the recent events (one event) from the news that illustrate a positive or negative implication of the impact of the Internet on the actual protection of the freedom of speech. What is your opinion on the event?

Cyberspace: Censorship or Lawlessness

Freedom of speech is the right to express opinions without censorship or restraint. Freedom of speech in cyberspace has been a highly debated topic since the advent of the Internet. Cyberspace can become a lawless zone where the weak are prey to the strong.

Due to the surge of fraudulent and unscrupulous entities attacking websites and stealing sensitive data, governments are taking action by censoring or limiting the usage of, or access to, certain sites.
In revolt against these actions, citizens of the Internet raised an uproar. Social media users retorted by using black backgrounds as profile pictures, creating hate pages, and posting viral commentaries.

Radio stations were flooded with calls from angry Internet citizens who wanted their voices to be heard on air. The infamous Anonymous group took down government sites as a warning to all government entities. These are just examples of how the people responded.

What kind of universality would it be if censorship were to rule the Internet, and what would universal access mean if it were access to only some information, only some ideas, only some images, only some knowledge? (Matsuura, 2005). Yes, it is a fact that the purpose of the Internet is to provide universal access to all. But what do we do when that purpose is blatantly abused to inflict social, mental, or sometimes physical harm?

Cyberspace has evolved into an entity that benefits people daily. Whether in the form of commerce, education, media, or entertainment, cyberspace will always be a part of our daily lives. One key element the Internet provides is ease of communication.
You can chat, call, and even send SMS/MMS messages by means of the Internet. This also gave rise to social media, which became a connecting medium for socializing.

The Internet provides vast functionality; it could take days or weeks to enumerate it all. In summary, cyberspace is an inexhaustible tool that is free for everyone to use. The Internet has frequently played an important part in such engagements by enabling communities to join together and trade information instantaneously and by creating a sense of solidarity (La Rue, 2011).

There are two sides to everything, including cyberspace. As the years have passed, the evolution of the information highway has spawned features that are used predominantly by destructive entities. One example is pornography.
Nowadays, pornography has become a tradable market whose yearly revenue amounts to billions.
It has also become a breeding ground for pedophiles and child molesters. Cyberspace likewise became an outlet for “hacking activities” that prey upon the average person. These are just some of the ugly truths on which we often neglect to take action.

Knowledge is power, as the saying goes, but with unethical usage this virtual land becomes a lawless society in which anyone can be a victim. Yes, it is true that freedom of speech should not be limited or censored, but we should also keep in mind that society is governed by ethics.
We should not limit or censor the virtual landscape; what we should do is apply good judgment when using the Internet.
We have the capacity to do that. What we sometimes lack is the focus and the sense of responsibility to do the right thing.

  • Matsuura, K. (2005). International Conference on Freedom of Expression in Cyberspace, Paris, France, February 3-5, 2005 (UNESCO)
  • Freedom and security in cyberspace. Last accessed on February 27, 2012
  • La Rue, F. (2011). Freedom of expression everywhere, including in cyberspace. November 4, 2011


Tuesday, February 26, 2013

Goals And Techniques Of Process Analysis

Discuss the goals and techniques of Process Analysis

Process analysis involves the series of events that result in an achievement; it tells you how that series of events occurred. Process analysis is of two types, informational and directional. Informational analysis asks the question, “How is this done?”
This analysis tells you how a certain thing was done or achieved. Directional analysis, on the other hand, asks the question, “How can you do this?” Here you examine how you can do a certain thing so that the process can be repeated. Directional analysis gives directions for a certain process.

The purpose of performing a process analysis is to understand how to do a certain thing or how it works, to ascertain how effective a process is, or to argue about its significance. The goals of process analysis are to evaluate completeness, identify the factors that make process maps difficult to use, isolate bottlenecks, measure process time, find redundancies, and examine resource allocation.

While analyzing a process, you ensure that it is performing properly and giving maximum productivity with minimum bottlenecks. Process Mapping is the first step in process analysis, which involves creating a visual presentation of the entire process.

Once it is mapped, the process is methodically analyzed to identify the bottlenecks or the constraints that hinder the flow of the process (Belize 2011). According to the Theory of Constraints given by Goldratt in 1986, the main focus is to identify the bottleneck first, and then to ensure that the complete process is functioning at a speed to equal the bottleneck.
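
Identifying the bottleneck from mapped step times, as the Theory of Constraints suggests, can be expressed in a few lines. The step names and times below are hypothetical:

```python
def find_bottleneck(step_times):
    """Return the slowest step (the bottleneck) and the throughput it
    imposes on the whole process, in items per time unit."""
    bottleneck = max(step_times, key=step_times.get)
    return bottleneck, 1.0 / step_times[bottleneck]

# Hypothetical processing times, in minutes per item
steps = {"receive": 2.0, "review": 8.0, "approve": 3.0, "archive": 1.0}
name, throughput = find_bottleneck(steps)
print(name, throughput)  # review 0.125
```

However fast the other steps run, the process as a whole cannot exceed the bottleneck's throughput, which is why the theory says to pace the entire process to it.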

Philip Ullah and Mike Robson suggest one particular technique of process analysis, “value-added analysis”, which is performed at each step of the process. In this analysis, each step is placed into one of three categories.
The first category is “step adds real value”, the second is “step adds business value”, and the last is “step adds no value”.
Once all the steps within the process are put into their categories, the next step of value-added analysis is to speed up the steps that fall into the first category, or those that add real value to the output of the process.
Then the business value steps are minimized or eliminated and the no-value steps are entirely eliminated. This is done through automation and process re-designs (Ullah & Robson 1996).
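
The mapping from category to suggested action in value-added analysis can be captured directly. The process steps and their categorizations below are hypothetical:

```python
# Action suggested for each value-added category (Ullah & Robson 1996)
ACTIONS = {
    "real value": "speed up",
    "business value": "minimize or eliminate",
    "no value": "eliminate",
}

def value_added_plan(categorized_steps):
    """Map each categorized process step to its suggested action."""
    return {step: ACTIONS[category] for step, category in categorized_steps.items()}

# Hypothetical categorization of an order-handling process
plan = value_added_plan({
    "pack order": "real value",
    "update ledger": "business value",
    "re-key address": "no value",
})
print(plan["re-key address"])  # eliminate
```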

Another common technique used for process analysis is the “Cycle time analysis”. In this technique, distinct maximum and minimum processing times are allocated to each process step. Delay and lag times are also measured for each step.
This technique usually reveals that only 5-10% of the entire process time is actual work time. Such findings help you recognize areas that need improvement and suggest measures for making those improvements in the future. Cycle time can be reduced through electronic workflow and centralized data stores.
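
The core of cycle time analysis is separating processing time from delay. A minimal sketch, with hypothetical figures chosen so the work fraction lands near the 5-10% range mentioned above:

```python
def work_time_ratio(steps):
    """Fraction of total cycle time that is actual processing time;
    the remainder is delay or lag between steps."""
    work = sum(step["process"] for step in steps)
    total = sum(step["process"] + step["delay"] for step in steps)
    return work / total

# Hypothetical steps: minutes of processing vs. minutes of waiting
steps = [
    {"process": 5, "delay": 120},
    {"process": 10, "delay": 60},
    {"process": 3, "delay": 160},
]
print(round(work_time_ratio(steps), 3))  # 0.05
```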

Other than these two techniques, there are others that can be used for process analysis. The important ones are gap analysis, root-cause analysis, examining experience, and observation. Other common techniques are customer requirement analysis, Pareto analysis, matrix analysis, supplier feedback, role playing, and so on (Long 2012).

The process analysis stage is frequently the stage that is given the least attention, for various reasons. Nevertheless, it is also the stage most likely to bring about the highest return on investment of all the phases.

  • Belize, D. (2011). Process Analysis Tools and Techniques. Last accessed 6th February 2013
  • Robson, M. and Ullah, P. (1996). A Practical Guide to Business Process Re-engineering. England: Gower Publishing Ltd.
  • Long, K.A. (2012). Outline of Common Procedure Analysis Techniques. Business Rules Journal, 13(12). Last accessed 6th February 2013


Monday, February 25, 2013

Process Metrics Effectiveness

Choose a process metrics to discuss. How effective would it be to help process improvement? How easy is it to use?

Process metrics are used to measure the components of the processes used to obtain software (Singh et al. 2011). They are measurements that deal with a specific project or program and enable organizations to obtain, evaluate, and communicate the results of process improvement.

Time is the key element that affects process metrics, as it plays a big role in the quantitative analysis of projects. Comparing the delta between the proposed and the elapsed time is therefore a significant component of project completion. Hence, process metrics are very useful for assessing the improvement of a process (Zelkowitz, 2009).

Process metrics are the means by which a software development project is sustained throughout its life cycle. These metrics are collected in order to inform strategic decisions about long-term process improvements. There are several process metrics, and some of them are related to and dependent on other processes. One such metric is the fault or error reporting metric, which we will discuss here.

Fault or Error Reporting Metric

The end goal of any software development process is to come up with a software system that meets the requirements of the business, while being done on time within the financial budget, and is easy to manage with enhancements and changes (Kan 2002).

Source: GQM Paradigm

The GQM or Goal Question Metric model is an easy way to ensure that the metrics collected are closely related to the business goals (Cammarano 2007). The software development process is a key process, especially in QA testing, and the fault or error reporting metric can greatly impact it. This particular metric analyzes the average time the development team spends on error correction.

Most software projects are carried out under strict time limits; hence, it is essential that modules are developed on time and are as free of errors as possible. However, due to interactions between individual modules and the larger application, developing a completely error-free module is not always possible. Another issue is the error reporting structure. A specific module may function as designed, but when it is handed over for testing it is combined with the larger application. During testing the entire application is tested as a whole, so individual errors in specific modules may be difficult to recognize.
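
Computing the metric itself is straightforward once error records carry the module name and the time to resolution. The record format and figures below are assumptions for illustration:

```python
from collections import defaultdict

def average_resolution_time(error_log):
    """Average hours spent resolving errors, grouped per module.
    error_log: iterable of (module, hours_to_resolve) records."""
    totals = defaultdict(lambda: [0.0, 0])
    for module, hours in error_log:
        totals[module][0] += hours
        totals[module][1] += 1
    return {module: total / count for module, (total, count) in totals.items()}

# Hypothetical defect records exported from a tracking system
log = [("billing", 4.0), ("billing", 6.0), ("auth", 2.0)]
print(average_resolution_time(log))  # {'billing': 5.0, 'auth': 2.0}
```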

Fault/Error Reporting Metrics and Process Improvement

It has been reported that using this error reporting metric can have a positive effect on the entire software development process. Some important aspects of the process that can be determined using this metric are average resolution time, average number of errors per module, the criticality of error impact, and potential enhancement opportunities. This helps improve the development process directly. It is challenging to make an application stable and fault-tolerant given its dependence on components such as supporting applications and hardware infrastructure. Hence, fault/error reporting metrics can definitely be effective in helping process improvement. The metric also helps in understanding the financial and resource impact, which can result in the project being completed by the specified deadline.

How easy is it to use?

There are certain difficulties faced when using the error reporting metric, namely identification and ease of usage. It is very easy to record errors with it, but the subsequent phases of analyzing and identifying system errors pose a problem. There have been cases where users wrongly reported a warning as an error, which led to a negative impact on investigation time.


Process measurement is always done either to help in process improvement or to evaluate whether the success criteria of a project have been achieved. Once you are clear about the strategy you are going to use for improvement, you can choose and collect suitable metrics. It is only by collecting metrics that a project’s or organization’s success and performance can be validated. Metrics are the proof; without them, it is only guesswork. Process identification, analysis, measurement, and change are critical factors for any software development process.

  • Singh, G., Singh, D. and Singh, V. (2011). A Study of Software Metrics. IJCEM International Journal of Computational Engineering & Management, Vol. 11
  • Zelkowitz, M.V. (2009). Process Improvement. Last accessed 11th February 2013
  • Cammarano, R. (2008). Goal Question Metric (GQM) Model. Last accessed 11th February 2013


Overlooked Risks In Software Development

What software risks are most commonly overlooked or not managed well within your organization? What mitigation strategies would you recommend to lessen the severity of the risks? Do you have any suggestions for how these risks may be prevented/ avoided in the future?

Risk management is vital for software development projects. Software risk management is needed during project execution for control purposes and project planning, and it helps reduce the chances of project failure. The first step in software project risk management is to identify a set of risks and record them in a checklist (Arnuphaptrairong 2011).

Installing mapping systems and other large software projects has proved beneficial to organizations. However, the risks involved in such an implementation are huge, which establishes the need for a systematic and aggressive risk management process to make sure the project is a success. The risk management process should address three main risks linked to implementations of big systems: organizational, business, and technical risks. Most people know all about technical risk and are familiar with how to manage it (Campbell 2001).

Often the other two risks, organizational and business risk, are overlooked. However, for software projects to be successful, these two risks must also be measured and controlled. Organizational risk measures the possibility that the client or user will not take advantage of the system’s complete potential.

Resistance to change and insufficient user preparation are the reasons this could happen. Organizational risk is specific to the organization that develops the project (Nielsen 2009).
Business risk, on the other hand, determines the chances that the newly implemented system will fail to deliver financial benefits and productivity gains worth more than the cost incurred in achieving them in the first place.
This failure can have a number of causes. Generally the main one is a lack of alignment between the functions built into the system and the priorities and business strategies of the company.
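
A first-pass measurement of business risk is a simple cost-benefit comparison over the system's evaluation horizon. All figures below are hypothetical:

```python
def business_risk_check(annual_benefit, annual_cost, years, implementation_cost):
    """Net value of a new system over the evaluation horizon; a
    non-positive net value flags the project as a business risk."""
    net = annual_benefit * years - (annual_cost * years + implementation_cost)
    return {"net_value": net, "at_risk": net <= 0}

# Hypothetical figures for a new system evaluated over 3 years
print(business_risk_check(200_000, 50_000, 3, 500_000))
# {'net_value': -50000, 'at_risk': True}
```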

Change management initiatives, although powerful, can only go so far toward mitigating organizational and business risks. We need stronger initiatives that involve not only preparatory training but also various elements specially designed to make sure that the new system is fully incorporated into the daily operations of all the relevant departments, within the specified duration of time.

Thus, operations integration must be performed to lessen the severity of these two risks. The process of operations integration comprises of the information and approaches needed to make sure that the new system delivers the target result within the specified time and budget.
In order for this mitigation strategy to succeed, all departments that will use the new system must follow its practices in a disciplined manner. Some of the actions included in this strategy are as follows:
  • Establish clearly which employees are going to use the new system.
  • Communicate clearly the new system’s corporate goals.
  • Align the company’s work processes accurately with the business processes built into the new system.
  • Record the aligned processes in the organization’s manual of Policies and Work Procedures.
  • Provide training sessions for the users so they can understand the work functions of the new system.
  • Develop incentives and disincentives for incorporating the new system into the organization.
It is up to top management to lead strongly and emphasize the importance of starting operations integration at the very beginning of the project. Management also needs to make sure that the operations integration process is led and carried out not by the IT department, but by the senior managers of the user teams that will be working with the new system.

It is a fact that risk in development is inevitable. Therefore, it has to be accepted and managed in any software project. We must remember that every project needs to manage three risk components: organizational, business, and technical. None of these should be overlooked if a new system is to be implemented successfully and efficiently.
  • Arnuphaptrairong, T. (2011). Top Ten Lists of Software Project Risks: Evidence from the Literature Survey. Proceedings of the International MultiConference of Engineers and Computer Scientists, Vol. 1. Last accessed 22nd February 2013
  • Campbell, M. (2001). The Two Overlooked Aspects of IT Risk Management. Last accessed 22nd February 2013
  • Nielsen, D. (2009). Identifying Risks to Software Projects. Last accessed 22nd February 2013


Legacy System And Client-Server System

Some of you have lived through the conversion of a legacy system into a client-server or distributed system. Others have tangentially been involved, while others may have just heard about it. With the information you have read in this lecture and the text, and/or your own experiences, discuss some possible problems that might arise in the conversion.

Software and applications that run on old technology even though newer technologies are available are known as legacy systems. Because the costs of replacement are usually high, not to mention the effort, legacy systems are generally left as they are (Burke 2011). Thus, in most cases a legacy system stays in use because considerable time and effort would be needed to understand it before it could be changed and a better, newer technology put in its place.

The incentive behind replacing a legacy system with a web-based system is to use a single client for every platform: one set of code is maintained and used across all platforms (Dossick & Kaiser 1996). Another factor that pushes for conversion is that the replacement can target specific users and situations, and converting a legacy system into a thin client generally results in a significant decrease in training costs. Switching between applications also becomes much simpler for the user.

Possible Problems that may arise in the conversion

When organizations decide to integrate legacy systems with newer technology, the biggest challenges come from the new hardware and software. Even when they find a way to achieve such integration, it usually involves a great deal of money and effort to create and sustain the integration process. For instance, many legacy systems operate on flat files, and bolting a web portal interface onto such systems places an additional burden on the dependability of the new system because it requires extra effort (Rea 2011). The solution is for the organization to build a new system that leverages the legacy data on the network, by developing a network hosting system and a front-end function.
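As a minimal sketch of that front-end idea, the snippet below parses hypothetical fixed-width flat-file records (the layout, field names, and sample data are all assumptions, not taken from any real system) and turns them into structured data that a web tier could serve as JSON:

```python
import json

def parse_flat_file(lines, field_widths):
    """Parse fixed-width flat-file records (a common legacy format)
    into dictionaries keyed by field name."""
    records = []
    for line in lines:
        record, pos = {}, 0
        for name, width in field_widths:
            # Slice out each fixed-width column and trim the padding.
            record[name] = line[pos:pos + width].strip()
            pos += width
        records.append(record)
    return records

# Hypothetical legacy layout: 6-char id, 12-char name, 8-char balance.
LAYOUT = [("id", 6), ("name", 12), ("balance", 8)]
legacy_lines = [
    "000001Alice Smith   120.50",
    "000002Bob Jones      75.00",
]

# The front-end function hands the parsed records to the web tier as JSON.
print(json.dumps(parse_flat_file(legacy_lines, LAYOUT)))
```

A real conversion would of course read the files from the legacy host and wrap this in an HTTP service, but the point is that the legacy data itself need not change for the web front end to use it.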

The high cost of maintenance and operation is one of the major issues with legacy systems. Costs are also incurred in ensuring that the IT team can keep working with the legacy system over a long period of time, which is a significant responsibility for an organization. A common option is to replace the legacy system with an established package, or an internally built system, that fulfils the requirements of the business and thus improves business processes.

Converting a legacy system to a client-server or distributed system is about protecting and extending the investment in the system by migrating it to newer platforms (Good 2002). The conversion process adds new capabilities to the resulting system and lowers operational costs by adopting technologies such as web services.
According to Burke (2011), supporting and maintaining a legacy system is difficult because changes in technology raise ever higher barriers to keeping such systems running. Ultimately, organizations look for current, updated systems and technologies to replace the older ones.

In the end, one major problem that can occur during conversion is that the new client-server systems (fat or thin clients) may be built around document management services and flat files rather than relational databases.

  • Dossick, S. and Kaiser, G. (1996). WWW Access to Legacy Client/Server Applications. Available online; last accessed 21st February 2013.
  • Burke, A. (2011). Definition of Legacy System. Available online; last accessed 21st February 2013.
  • Rea, W. (2011). Problems with Legacy Systems. Available online; last accessed 21st February 2013.
  • Good, D. (2002). Legacy Transformation. Available online.

Elad Shalom,
CTO at

Where Not To Use SOA

Giving reasons for your answer, suggest two types of applications where you would not recommend the use of service-oriented architecture and why.

SOA, or Service-Oriented Architecture, is an architectural concept in which system components expose data and functionality in the form of services. These services are accessible to other components through standards-based technologies (Thomson 2008).

With SOA, one can create new applications by mix-and-match: first decide on the application that is needed, then identify the existing components that can help build it, and finally compose them together (Gralla 2003).
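The mix-and-match idea can be sketched in a few lines. The services and data below are purely illustrative stand-ins (no real SOA middleware is involved): two existing service components are composed into a new "order" application without modifying either one.

```python
# Hypothetical existing services, each exposing functionality
# behind a uniform request/response interface.
def inventory_service(request):
    stock = {"widget": 3, "gear": 0}              # stand-in for a real backend
    return {"item": request["item"], "in_stock": stock.get(request["item"], 0) > 0}

def billing_service(request):
    prices = {"widget": 9.99, "gear": 4.50}       # stand-in for a real backend
    return {"item": request["item"], "charge": prices.get(request["item"], 0.0)}

def compose(*services):
    """Build a new application by mixing and matching existing services:
    each service receives the request and contributes to the response."""
    def application(request):
        response = {}
        for service in services:
            response.update(service(request))
        return response
    return application

# A new application assembled from the existing components, unchanged.
order_app = compose(inventory_service, billing_service)
print(order_app({"item": "widget"}))  # {'item': 'widget', 'in_stock': True, 'charge': 9.99}
```

In a real SOA the composition would happen over standards-based interfaces (e.g. Web services) rather than in-process function calls, but the principle is the same: the application is assembled from services, not written from scratch.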

Although SOA seems increasingly popular today, it is not a new concept at all; it has existed since the 1980s. However, the idea did not take root because there was no application programming interface or standard middleware to enable it. With the development of Web services, SOA has resurfaced: the underlying architecture of Web services fits the SOA approach perfectly. It has even been said that SOA is the key to the future of Web services (Gralla 2003).
The value of SOA in certain situations cannot be denied. However, when the IT environment is homogeneous, or no real change is expected, SOA is not recommended (Bloomberg 2004).

If your organization has only a single vendor delivering technologies, then SOA will not be very cost-effective. Say, for example, the main purpose of your IT infrastructure is to run a website: you may have a database, two Web servers, and an application server. Adding an SOA layer will not add significant value to that IT investment. Smaller companies generally run a small, homogeneous network hidden behind a firewall; for such companies, and for those using single-vendor technology, SOA is not a very practical addition to their basic infrastructure (Bloomberg 2004).

The second type of application where Service-Oriented Architecture is not recommended is one where no change to the IT infrastructure is expected. This situation comes down to "don't spoil something that's already working". If an old legacy system has been sitting in the corner for years together, gathering dust, and there is no reason to believe that things are going to change in the future, then why mess with it? This applies to companies and organizations big and small. I myself have computers that are more than six years old, in perfect condition and running Windows 98; some of you may even have one with Windows 2000 approaching its fifth anniversary.

The point is that these systems, however old, may still be doing a good job at what they were built for so many years ago, as long as you don't fiddle with them. This is true not only of the systems but also of the applications running on them.
Practically speaking, if there is hardly any reason to change the business logic, data flow, process, presentation, or other aspects of the application, then converting these relics to SOA may not be worth the effort.
In the end, new approaches do add supplementary value, but they never fully replace the approaches that have existed for a lifetime.
Hence, it is very important to understand when to adopt these new approaches and when not to.

  • Bloomberg, J. (2004). When Not To Use an SOA. Available online; last accessed 21st February 2013.
  • Gralla, P. (2003). What Is Service-Oriented Architecture? Available online; last accessed 21st February 2013.
  • Thomson, D. (2008). Application of Service-Oriented Architecture to Distributed Simulation. Available online; last accessed 21st February 2013.

Elad Shalom,
CTO at

Risk Management Deliverers

List two or three risk management deliverers that you have seen or produced during a past or current project (such as: risk list, risk status report, risk management plan, expected value report, risk monitoring report, risk response form, etc.).

Describe their effectiveness.                                                        
When during the project life cycle were they developed?
How much training did those involved in the related processes receive?

Risks are part of every project. For a project to be successful, the key is not to avoid risks, but to know and understand them. A risk is the probability of occurrence of a condition or event that would negatively affect the project development process. Risk management involves identifying, understanding, and managing known risks so that the possibility of fulfilling the project objectives is increased.

Applying software risk management processes in practice is challenging, especially when it comes to incorporating the risk management process into the software development organization. Despite these difficulties, using risk management techniques and tools in project development processes is very beneficial (Kwak & Stoddard 2004).

Risk management starts with risk identification, which helps in recognizing probable losses and their causes. To implement an efficient risk management process, the project members must have an overall perspective on the software development project. Risk assessment then establishes the potential loss should a risk actually materialize (Jones 1994).
The next step is mitigation, which involves developing a risk avoidance plan; the last step is executing both the risk mitigation and risk avoidance plans. Together these steps produce a thorough description of all the risks, which are documented in a Risk List.

The list must record, for every risk, its definition, likelihood, consequence, indicators, risk ranking, contingency plan, and mitigation strategy (Boban et al. 2003). Creating a risk database does not necessarily involve technology; you can even use index cards, although functions such as searching, sorting, and linking would then become a challenge and might lead to errors. Risk lists can be implemented effectively with Microsoft Excel or even Microsoft Word; we implemented ours effectively using Microsoft Project.
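To make the structure of such a risk list concrete, here is a small sketch of one as a data structure. The example risks, the 1-5 scoring scale, and the likelihood-times-consequence ranking formula are illustrative assumptions, not prescribed by Boban et al.:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a risk list; the fields mirror those named above
    (definition, likelihood, consequence, indicators, mitigation
    strategy, contingency plan). Scores run 1 (low) to 5 (high)."""
    definition: str
    likelihood: int
    consequence: int
    indicators: str
    mitigation_strategy: str
    contingency_plan: str

    @property
    def ranking(self) -> int:
        # A common exposure score: likelihood x consequence
        # (an assumption, not taken from the source).
        return self.likelihood * self.consequence

risk_list = [
    Risk("Key developer leaves mid-project", 2, 5,
         "Low morale, external offers",
         "Pair programming, shared documentation",
         "Reassign work and re-plan the iteration"),
    Risk("Requirements change late", 4, 3,
         "Frequent change requests",
         "Short iterations, early reviews",
         "Re-baseline scope with the client"),
]

# Sort so the highest-exposure risks appear first, as in a status report.
for r in sorted(risk_list, key=lambda r: r.ranking, reverse=True):
    print(f"{r.ranking:>2}  {r.definition}")
```

Whether the list lives in Excel, Project, or code, the value comes from keeping all of these fields together per risk so the ranking can drive the status reporting described next.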

Another risk management deliverable we used is Risk Status Reporting. This should function at two levels: external and internal. For IT operations it operates at the internal level, and here risk status reports must consider four possible risk management situations for every risk: resolution, contingency, avoidance, and change. Risk reporting includes recording, collecting, and reporting the various risk assessments.
It is important to monitor the results and assess the effectiveness of the existing plans. Risk reporting provides a foundation for assessing project updates, and because risk reports are formal records, they ensure that the risk assessments are comprehensive. Although it requires continuous planning and supervision, this approach allows risks to be alleviated in the early phases of software development, when the costs of such projects are still low.

We developed these risk management deliverables after we had defined the objectives and activities of the project. The project life cycle includes a step where potential challenges are identified and a contingency plan is developed; this is where we conduct risk assessment and reporting and use deliverables such as risk lists and risk status reports.

For the implementation of risk management to be successful, the organization defines management roles for the project. Specific project members must be appointed whose foremost activities relate to risk management of the software development project; it is their responsibility to continually identify risks and risk-related activities.
All the project stakeholders share responsibility for risk management. However, the Project Director is the one who decides whether to move forward with mitigation strategies and implement contingency plans. This is especially true for cases that require additional costs.

The key to efficient risk management lies in identifying and mitigating true risks, and in forming a contingency strategy in case a potential risk becomes a reality (Charette 1989).

  • Boban, M., Požgaj, Z. and Sertic, H. (2003). Strategies for Successful Software Development Risk Management. Management, 8 (2), p. 77-91.
  • Charette, R. (1989). Software Engineering Risk Analysis and Management. New York: McGraw Hill.
  • Jones, C. (1994). Assessment and Control of Software Risk. New York: Prentice Hall.
  • Kwak, Y.H. and Stoddard, J. (2004). Project risk management: lessons learned from software development environment. Technovation, 24, p. 915-920.

Elad Shalom,
CTO at