Wednesday, September 4, 2013

What is a Derived Attribute

What is a derived attribute? Give an example.

Derived attributes are attributes whose values are computed from other attributes.

These values are generated using algorithms, calculations, and other relevant procedures. The rules for calculating a derived attribute are a concern of the processing side of the information system rather than of the data itself. Such attributes should be integrated into the data model only when the calculation rules would otherwise be lost.

Database designers who value the elegance of their designs avoid storing derived attributes in the database. Instead, they implement the derivation as an algorithm that is executed only when a query asks for the value. In this way, the elegance of the design is preserved.

The classic example of a derived attribute is a person's age calculated with a Julian-style day count: subtract the person's date of birth from the current date and divide the result by 365.
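As a minimal sketch of that calculation, the following uses Python's `date.toordinal()`, which gives a running day count similar to a Julian day number. Note that dividing by 365 ignores leap years, so the result can be off by a day near birthdays; the function name and dates are illustrative.

```python
from datetime import date

def age_in_years(birth_date: date, on_date: date) -> int:
    """Derive age by subtracting day counts and dividing by 365,
    mirroring the Julian-style calculation described above."""
    days_alive = on_date.toordinal() - birth_date.toordinal()
    return days_alive // 365

print(age_in_years(date(1980, 6, 15), date(2013, 9, 4)))  # 33
```

In a real schema this value would not be stored; a query or view would call such a function (or the SQL equivalent) on demand.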

The main drawback of not storing derived attributes is performance: on large databases, queries become slow when the derived values must be computed during query execution.

Elad Shalom,
Senior Consultant at SwiftRadius
Co-Founder of Saint John Developer User Group

Action When MultiValued Attributes Encountered

What two courses of action are available to a designer when a multivalued attribute is encountered?

When a multivalued attribute is encountered, the designer has two alternatives.
  1. The multivalued attribute can be split into its components, which are kept in the same entity. The condition attached to this approach is that only a single entry can be stored per component. For example, CUSTOMER_TELEPHONE can be decomposed into CUST_HOMEPHN, CUST_MOBILE and CUST_FAX_NUMBER. If the customer has more than one mobile number, this structure cannot store the extra values; each component holds exactly one value.
  2. The second approach is to create a new entity composed of the components of the multivalued attribute. This new entity is linked to the entity in which the multivalued attribute originally appeared. This method is preferable when the number of values in the multivalued attribute is unbounded, which is the common case in practice. A good example is classifying employees as "technical", which allows each of them to hold certifications in multiple areas and at multiple levels.
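The second approach can be sketched with an in-memory SQLite database. The table and column names (`customer_phone`, `phone_type`) are illustrative, not from the text; the point is that each customer can now hold any number of phone numbers, including two mobiles.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    cust_id   INTEGER PRIMARY KEY,
    cust_name TEXT NOT NULL
);
-- The multivalued attribute becomes its own entity,
-- linked back to CUSTOMER by a foreign key.
CREATE TABLE customer_phone (
    phone_id   INTEGER PRIMARY KEY,
    cust_id    INTEGER NOT NULL REFERENCES customer(cust_id),
    phone_type TEXT NOT NULL,   -- 'home', 'mobile', 'fax', ...
    number     TEXT NOT NULL
);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.executemany(
    "INSERT INTO customer_phone (cust_id, phone_type, number) VALUES (?, ?, ?)",
    [(1, 'mobile', '555-0101'), (1, 'mobile', '555-0102'), (1, 'home', '555-0200')],
)
rows = conn.execute(
    "SELECT phone_type, number FROM customer_phone WHERE cust_id = 1"
).fetchall()
print(len(rows))  # 3
```

Under the first approach, the second mobile number would simply have nowhere to go.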

Elad Shalom,
Senior Consultant at SwiftRadius

Database Design and Normalization Principles

The “right” database design can rarely be decided on the basis of normalization principles alone. Do you agree or disagree with this assertion?

The Right Database and Normalization
A database is a collection of information formatted into tables, charts, or files. Data tables are generally collections of information entered into columns, rows and fields.

Columns in each table can be selected through a primary sorting key and there may be unique keys to assist in data retrieval and input.
You may have columns that are fixed in length or vary depending on the type of data that is being input. At the same time, records can also be fixed or varied.
You can restrict column names and keep your column and table names case sensitive.
You can develop a database in any form you desire as long as it is "normal."

Normalization Characteristics
There are many ways to construct a database, including the relational database built on the principles of normalization. Database normalization techniques were originally constructed by mathematicians.

These types of databases are difficult to understand and program unless you have a math background.
To make life easier for those who do not follow the full mathematical treatment, the benefits of normalized relational databases can be summarized as follows:
  • Eliminating redundant data storage: the same data is not stored over and over again, creating endless duplicate entries. Data can be overwritten, but normalization ensures there are no multiple copies of the same fact.
  • Modeling of real world objects or entities and their relation to one another.
  • Structuring the data to enable a model to be flexible and adaptable. 
The "real" definition of normalization is the procedure of organizing data, shaping it into workable tables and columns, and producing data that is easy to manage. If your data is normalized, there are no redundancies and no entering of the same data over and over. The process can be summarized as:
  • Identify the relations between attributes.
  • Group the attributes into relations.
  • Combine the relations to form a complete database.
There are different forms of databases and normalization, and it is good for database designers and programmers to understand which form of normalization they are using. This helps in finding "broken" entries and taking the right actions to fix them.

Every database has attributes to normalize: designers need to define the attributes, group related attributes into relations, select primary and candidate keys for every relation, and remove repeating groups. Functional dependencies must be identified, as must all transitive dependencies.
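A tiny SQLite sketch of the redundancy-elimination benefit described above; the tables and data are invented for illustration. Instead of repeating a customer's details on every order row, the customer becomes its own relation with a primary key, and orders reference it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Normalized: customer data stored once, orders reference it.
-- (The unnormalized alternative repeats name and city per order.)
CREATE TABLE customer (
    cust_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    city    TEXT NOT NULL
);
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    cust_id  INTEGER NOT NULL REFERENCES customer(cust_id),
    item     TEXT NOT NULL
);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 'Saint John')")
conn.executemany("INSERT INTO orders (cust_id, item) VALUES (?, ?)",
                 [(1, 'widget'), (1, 'gadget')])
# The city is stored once; a join recovers the flat view on demand.
rows = conn.execute("""
    SELECT o.item, c.city FROM orders o
    JOIN customer c ON c.cust_id = o.cust_id
""").fetchall()
print(len(rows))  # 2
```

Updating the customer's city now means changing one row, with no risk of the copies drifting out of sync.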

Is Normalization Normal?
When all the theories have been listed, stated and argued, the question remains: is normalization normal?

Normalizing your databases makes sense for the company: it provides good performance, prevents duplication, avoids synchronization problems, and allows programmers to write simpler queries and code. Using set templates for developing databases makes data entry and use easier.
Yet what is normal for one department may not be normal for another. Normalization does not fix every problem; it may even create new ones.

Measure the data you need and how you will input and retrieve it, contingent on the type of information involved. Let your normal form be controlled, but also make provisions to customize your database.

A normalized database is great if you have template data to input and retrieve, but if you have complex data that needs to be retrieved in a specific manner, you may need to customize and "denormalize" your database.
"As the old adage goes, normalize until it hurts, denormalize until it works" (Atwood, 2008).

Atwood, Jeff (2008). Coding Horror: Maybe Normalizing Isn't Normal. Available online; last accessed 3 April 2013.
Marston, Tony (2004). Relations Data Model, Normalization and Effective Database Design. Available online; last accessed 3 April 2013.
Melton, Beth (2009). Databases, Normalizing Access Data. Available online; last accessed 3 April 2013.

Elad Shalom,
Senior Consultant at SwiftRadius

Wednesday, August 21, 2013

Top-Down and Bottom Up Processes

The process of starting with real-world objects and modeling them using entity-relationship diagrams is referred to as a top-down process. Starting with one large table and its functional dependencies and applying normalization is referred to as bottom-up development.

What are the advantages and disadvantages of each method? Are there any inherent dangers with either method? Which would you prefer to use? Is it really an either-or situation?

Top-Down and Bottom Up Processes

Top Down
Top-down is deductive reasoning. It is used in conjunction with analysis and decomposition: breaking a system down to gain insight into its different elements. First the total system is specified, and then its subsystems are detailed.
There may be many levels of refinement until everything is reduced to base elements. In simple terms, top-down approaches start with the big picture, which is broken down into smaller segments for ease of understanding and learning.

In business, top-down can be illustrated by leadership, business integration, processes, learning and data. Leadership requires enthusiasm about the improvement process: leaders believe there are processes that can improve the company. Business integration is communication about the new initiative. For processes, a small designated team begins to map current processes from the top to the bottom. Learning is the stage where training in the improvement issues is presented to the entire company. Data gathering at this point in the process is just beginning.

Advantages of top-down reasoning include a clear statement of direction from executives. Training is given to large numbers of employees and communication channels are built. There is a full list of processes, and documentation is required.

The main challenge is time. There are many activities, but analyzing and improving processes is slow, though thorough. Training provides tools and concepts, but not actual practice. In practice, there is often little immediate change with top-down reasoning.

Bottom Up
Bottom-up is inductive reasoning. The original idea is a sub-set of the emerging system. This is an information process based on data coming in from different sources to form a perception. Information processing is driven by incoming data from outside, and individual base elements are described in great detail.

All subsystems are linked until a complete system is formed. Think of bottom-up as a seed: the beginnings are small, but they grow in complexity to form the whole.
Leadership is connected and engages employees in improvement efforts.
Managers determine which processes in their department need to be fixed and assign employees to "fix" them. Most processes are narrow, with improvements implemented in only one area. The learning is hands-on training, either from a manager, vendor, or professional training organization. Data is real-time and the problems are known up front.
As the process improves, the team knows the results, but actual data is limited.

Utilizing the bottom-up approach, changes can be researched and implemented immediately.  Efforts are focused on specific defects and problems and learning is focused on real work problems.
Small improvements in processes and thinking are made quickly, but widespread changes take quite a bit of time to be recognized and utilized.

There is no strategy for focus, and improvements generally do not have a huge impact on the entire company. Improvements can spread, but they can also die quickly. Managers can only fix their own departments (Sweet, 2011).

There is no right or wrong method. If your company wants changes in processes to be slow and steady, use top-down reasoning. If you need quick fixes in departments, bottom-up may be the best approach. Mix the two processes together to arrive at real answers.

Sweet, Shelly (2011). Which is Best for Us? Top-Down, Bottom-Up or Middle Out? Available online; last accessed 3 April 2013.

Elad Shalom,
Senior Consultant at SwiftRadius

Developing Accurate Cost Estimates

What is the Biggest Problem in Developing Accurate Cost Estimations? Why?

Cost Estimation Methods 

Developing an accurate cost estimate can make the difference between completing a project on time and on budget and failing to do so. Cost estimation techniques are very important, particularly if you are the project manager. Estimating what processes cost and how they combine to produce the finished product should be part of the project proposal. Study the techniques you feel will give you the most accurate estimate.

Types of Cost Estimation Methods
  • Analogous estimating is learning from precedents. Read through past projects and determine what their cost estimates were based on. Analogous estimating provides a continuous basis for developing estimates from past learning. Project parameters that can be estimated include cost, budget, scope, and duration. Use analogous estimation to determine the complexity and size of the entire project, comparing a current activity to a recently finished one when information about the current project is unclear or unavailable. Combined with expert judgment, it is highly reliable (Project Management Knowledge, 2010).
  • Parametric estimating is a very accurate method of determining costs, based on a previous cost model. Using cost per line of code, per square foot, or per cubic inch is parametric estimating. Project managers use this method for construction and certain types of software development, where verified cases provide the basis for the estimates. When used correctly, parametric cost estimating is highly accurate; however, there are numerous challenges, and adverse effects can result if the formulas and premises are not used properly. Parametric cost estimating is very popular in software development to estimate development and product life-cycle costs.
  • Program Evaluation and Review Technique (PERT) takes three separate estimates: an optimistic outcome (O), a most likely outcome (M), and a pessimistic outcome (P). The premise is that the larger the distance between the pessimistic (worst case) and optimistic (best case) values, the less likely the project is to succeed. The estimate is calculated as E = (O + 4M + P) / 6. PERT separates each section of the project into events and activities and schedules them in their proper sequence. Paths connect the events, and the duration of the critical path is how long the project is estimated to take; delays are always factored into the paths. However, this method does not find the best way to complete a project (Mind Tools, 2012).
  • Rule-of-thumb cost estimating is a universally accepted estimating basis. Rule-of-thumb estimates are unique to every project and to each industry: several completed projects are reviewed as a benchmark for measurement and cost estimating. According to the Association for the Advancement of Cost Engineering International, a cost estimate is "an evaluation of all the costs of the elements of a project or efforts as defined by an agreed-upon scope" (US Army Corps of Engineers, 2000). A rule-of-thumb figure is the total estimated cost of a project and depends on how well the project, or its scope, is actually defined.
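The PERT calculation above can be sketched in a few lines of Python. The example values (O=4, M=6, P=14 days) are invented; the spread-based standard deviation shown is the common companion rule of thumb, not something stated in the text.

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (PERT) estimate: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_spread(optimistic: float, pessimistic: float) -> float:
    """Rough uncertainty measure: a wider O-P distance means a
    riskier estimate, echoing the premise described above."""
    return (pessimistic - optimistic) / 6

print(pert_estimate(4, 6, 14))  # 7.0
print(pert_spread(4, 14))
```

Note how the weighting pulls the estimate toward the most likely value while still penalizing a wide pessimistic tail.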

What is the Best Cost Estimating Method?

Every method has its own complications and inherent problems. However, when using analogous or precedent cost estimating, there is a basis and a foundation. All projects share the same elements: supplies, time, budget and finish. By following models that have already been proven, an accurate method of estimating costs can be reached.

Farens, Daniel V. (1999). Parametric Estimating – Past, Present and Future. Available online; last accessed 22 March 2013.
Project Management Knowledge (2010). The Ultimate Resource for Project Managers: Analogous Estimating Techniques. Available online; last accessed 22 March 2013.
U.S. Army Corps of Engineers (2000). A Guide to Developing and Documenting Cost Estimates During the Feasibility Study. Available online; last accessed 22 March 2013.

Elad Shalom,
Senior Consultant at SwiftRadius

Monday, April 29, 2013

Instance of Project Termination

Describe one instance of project termination you have experienced. What steps were undertaken to close out the project?

By and large, projects are terminated for two fundamental reasons: project success or project failure. Project success means the project has met its cost, schedule, and technical performance goals and has been integrated into the customer's organization to contribute to its mission.
Project failure means the project has failed to meet its cost, schedule, and technical performance goals, or that it no longer fits the organization's future.

In addition, there are two broad categories of project termination:
  • Natural termination
  • Unnatural termination
Young (2003) draws attention to some of the causes of project termination:
  • The project results have been delivered to the customer
  • The project has overrun its cost and schedule goals
  • The project owner's strategy has changed
  • The project's sponsor has gone
  • Environmental changes have affected the project
  • The project's priority is not high enough for it to survive the competition for resources
Here is an example of project termination that I have experienced:
I worked at an automobile company named HTA. The company terminated the development project for a new luxury automobile, code-named MB, which would have been the flagship.

After a great deal of analysis, they concluded that competition in the marketplace would be much stronger than estimated, rendering the project unprofitable according to their project planning.
In general, the termination decision does not crop up at any particular point but rather develops gradually throughout the project's lifecycle.

It should be kept in mind that project termination does not necessarily mean failure; it can be a strategic decision, put into operation when a project does not, or probably will not, support the organization's strategy.
Often, due to insufficient resources, time, and finances, the project closeout phase is neglected. Furthermore, higher management usually considers the cost of project closeout unnecessary (Newell, 2004).

The following are the steps to close out a project:
  1. Completion - At the outset, the project manager must make certain the project is 100% complete. In the closeout phase it is quite common to find several small outstanding tasks from earlier stages still unfinished.
  2. Documentation - Documentation can be characterized as any text or illustrative information that explains project deliverables (Mooz et al., 2003). It is very important that everything learned during the project, from the outset through to final operations, is captured and becomes an organizational asset.
  3. Project Systems Closure - All project systems, including the financial system, should be shut down at the closeout stage.
  4. Project Reviews - The project review usually takes place after all the project systems are closed. It is a link that joins two projects that follow one another.
  5. Disband the Project Team - Before moving employees to other assignments, the closeout phase offers an excellent opportunity to evaluate the effort, loyalty, and results of each team member individually.
  6. Stakeholder Satisfaction - Actions and activities are indispensable to confirm that the project has met all sponsor, customer, and other stakeholder requirements.

Mooz, H. Forsberg, K. & Cotterman, H. (2003) Communicating Project Management: The Integrated Vocabulary of Project Management and Systems Engineering. John Wiley and Sons.
Newell, S. (2004) Enhancing Cross-Project Learning, Engineering Management Journal, 16(1), pp 12-20.
Young. (2003) The Handbook of Project Management: A Practical Guide to Effective Policies and Procedures. 2nd Edition, Kogan Page.

Elad Shalom,
CTO at

Project Plan Enhancements

What enhancements would you make to the outline we're using for the project plan in this class? What would you recommend? Justify your additions.

A project plan is a standard official document used to direct both project execution and project control. The chief uses of the project plan are to document planning assumptions and decisions, facilitate communication between stakeholders, and document the agreed scope, cost, and schedule baselines. A project plan may be summary-level or detailed.

Keeping the plan up to date is an imperative job of the project manager (Lewis, 2000). Project updates concentrate on the following three constraints of project management:
  • Cost
  • Time
  • Scope
An efficient project manager understands that an amendment in one of these domains results in changes in the other two. An up-to-date project plan will show the effect of those changes on the entire project (Lewis, 2000).


Particularly in projects of long duration, costs can ebb and flow. Material costs can change, internal employees can be promoted with a subsequent increase in their hourly rate, and changes in external contractors can result in higher contract rates. These cost changes should be reflected in the project plan as soon as they are identified, and significant variances communicated to the decision-making team and/or sponsors (Wysocki, 2006).


As soon as a project plan is approved, a baseline is established, and the project has momentum, the project manager will track time worked on a regular basis. How frequently this happens depends on how current the team needs the plan to be. Weekly updates are common, but daily updates may be preferred for projects that are extremely time-sensitive.

It may be hard to remind the team to record the time they worked and on what. At times, a project manager has to chase down time records and jog the memory of team members to report their hours. This is also a good time for the project manager to bring the resources part of the project schedule up to date: vacations and team availability should be reviewed and adjusted as required.
In addition, the anticipated arrival of supplies and the availability of outside resources affect the project (Wysocki, 2006).


Undoubtedly, any amendment to the project's scope should be made in the plan when identified and as soon as possible. A number of what-if scenarios should be run before any scope changes are approved, depending on the tools the project manager uses to maintain the plan. It is best to determine the influence of scope changes on the project schedule and costs before actually allowing the changes (Wysocki, 2006).

It is beneficial to maintain a set point in the project plan. First, update the time worked and any recent cost amendments in the plan, and keep this as the current set point. Afterward, incorporate the scope changes, adjust any other factors involved, and save this as the latest project plan with the exact revision date.

Lewis, J. P. (2000) Project Planning, Scheduling & Control. 3rd edn. McGraw-Hill.
Wysocki, R. K. (2006) Effective Project Management: Traditional, Adaptive, Extreme. 4th edn. Wiley.

Elad Shalom,
CTO at

Perception of Project Management

Consider how your perception of project management has changed during these past eight weeks

Mind, thoughts, and perception usually change with the passage of time and the level of knowledge. Everyone possesses a different notion of what project management stands for. For some individuals, it sits on top of a plain shared to-do list.
Others consider it the arrangement of a huge set of resources to generate an extensive deliverable. During the past 8 weeks, my perception of project management has changed, and I have come to know that project management is the approach by which business goals are attained. It is about being structured from the start of the job to the closing stages. It is about having an efficient and well-organized team with a leader who can promote collaboration and motivate the team to obtain results.

At present, project managers around the world coordinate the efforts of individual resources to fulfill tasks and deliverables as constituents of project plans. Historically, the tools available to project managers have been more similar than different.

Project management is not simply for project managers any longer. For a long period, project manager was a function, not a designation; individuals said that they had come to the role by accident. In the past few years, this has all changed. Organizations have developed project manager career paths. Now, business divisions such as human resources, sales and marketing, and other departments require their staff to be qualified in project management too (Mooz, Forsberg & Cotterman, 2003).

Today, a number of different techniques are used to initiate project management. The team's manager will put the appropriate individuals in place, and each of them will look after his or her section of the project. If every person performs their job, they will achieve success quickly. The project manager will want to watch how things are proceeding to make certain there are no unexpected issues that could make the project unsuccessful.

The function of project manager is one that allows them to seek out some of the finest contemporary tools, programs, and modes of working day after day. There are particular levels that every business manager should be familiar with. The foremost is to concentrate on organization. It is very important that the team's leader is competent to organize properly, giving the appropriate people the appropriate jobs. They will understand whose abilities match which assignment, and that is how they will determine who performs what. They should first, as a team, scrutinize the problem and decide exactly what it requires; they should determine the goal and the best technique to attain it (Wysocki, 2006).

To attain an objective in an effective manner, the members of the team should contribute everything they can. They should show respect to the other team members and collaborate fully (Wysocki, 2006). When an issue or a hindrance arises that might delay the development of the plan, it should be communicated to the entire team and a decision made to avert further damage. By obtaining input from everyone involved, a solution can usually be found that lets the original objective be fulfilled with only a small number of changes.
Moreover, a first-rate team will find that its results are precise as well as constructive.
Any amendments to make the data comprehensible should be made along the way; however, if there is time to re-evaluate and verify, that is always wise.

Mooz, H. Forsberg, K. & Cotterman, H. (2003) Communicating Project Management: The Integrated Vocabulary of Project Management and Systems Engineering. John Wiley and Sons.
Wysocki, R. K. (2006) Effective Project Management: Traditional, Adaptive, Extreme. 4th edn. Wiley.

Elad Shalom,
CTO at

The Chaos Report

The Chaos report lists a variety of success factors that affect project management. This includes user involvement, executive management support, proper planning, clear statement of requirements, realistic expectations, ownership, hard-working and focused staff. Pick one of these, or research an alternative, and assess how this success factor would impact your particular project.

Project managers are always searching for the secret formula that will make their projects successful. There are some vital items that need to be considered and ensured in a proactive manner. One searches for those intangible critical success factors that can be managed to create an atmosphere favorable to the accomplishment of the project. According to the Chaos report (2009), the success factors are User Involvement, Executive Support, Clear Business Objectives, Emotional Maturity, Optimization, Agile Process, Project Management Expertise, Skilled Resources, Execution, and Tools and Infrastructure.

User Involvement

User involvement is an important notion in the development of constructive and functional systems and has clear-cut consequences for system success and user satisfaction. These days, user involvement in design procedures is often missing because of the increasing number of parties and experts taking part, and also because of the limited time allotted to the design. A lack of user involvement easily brings about function and performance issues.

User involvement is an extensive and imperative aspect for planners and architects in developing an appropriate product that will work for its users. However, practice shows that users are generally consulted comprehensively at the beginning and only occasionally during the rest of the project. Firm procedures give the design and implementation process the tuning required for adjustment.

According to Barki & Hartwick (1994), user involvement is a particular psychological state of the user: it signifies the degree to which a user sees IT as important and personally relevant.
User participation is anticipated to be a predecessor of user involvement, because active participants in IT development tend to develop the view that IT is both important and relevant (Barki & Hartwick, 1994).
The more users get involved, the more receptive they will be to a new system, although there may be contingent factors hindering the association. Besides, senior management's leadership may be positively linked with user involvement in systems development through giving accurate and rational information to users (Barki & Hartwick, 1994).

According to Yoon et al. (1995), user satisfaction is assumed to be positively associated with user involvement. Systems utilization is also anticipated to be positively linked with user involvement, and the quality of systems is expected to improve when more users are involved in their development (Barki & Hartwick, 1989).
In addition, users are more likely to develop an enhanced sense of ownership of the systems. This may help users reduce their anxiety, uncertainty, and unwillingness toward the systems, making the systems more approachable. It encourages a constructive image of the systems, which promotes their adoption by users (Leonard-Barton & Sinha, 1993).

Barki, H., & Hartwick, J 1994. ‘Measuring user participation, user involvement, and user attitude’, MIS Quarterly, vol. 18, pp. 59-82.
Leonard-Barton, D., & Sinha, D. K 1993. ‘Developer-user interaction and user satisfaction in internal technology transfer’, Academy of Management Journal, vol. 36, pp. 1125-1139.
Yoon, Y., Guimaraes, T., & O’Neal, Q 1995. ‘Exploring the factors associated with expert systems success’, MIS Quarterly, vol. 19, pp. 83-106.

Elad Shalom,
CTO at

Wednesday, February 27, 2013

Possible Outcomes for Component Replacement

Discuss why savings in cost from reusing existing software is not simply proportional to the size of the components that are used. What other factors affect the cost?

Principles of Component Independence and Possible Outcomes for Component Replacement

There is general agreement in the software engineering industry that a component is an independent software unit which can be composed with other independent units to create a software system (Sommerville, 1989). Another commonly accepted definition is that a component can be independently deployed and composed without modification, according to a composition standard (Councill and Heineman, 2001).

In any system, software or hardware, in order to determine its reliability it is important to first establish component independence.
This is typically achieved through independent component analysis (ICA), a computational technique for revealing hidden factors that underlie sets of measurements or signals (Oja, 2001). The two most commonly used criteria for interpreting component independence are minimization of mutual information and maximization of non-Gaussianity.

ICA is important because independence of components is a fundamental requirement for calculating system reliability (Woit, 1998) and can, to some extent, predict and prevent the possibility of system failure. Component-based systems need to evolve over time in order to prevent system failure and to add new functionality.
This evolution is typically controlled through the use of components, which are the units of change. When one component becomes redundant, it is replaced with another that also adheres to a standard of independence but is implemented in a different way.

The concept of component replacement involves replacing a component that no longer functions properly, or that no longer serves the purposes for which it was initially implemented, with another component that fixes the errors introduced by the original component or adds a new function required by the system's natural process of evolution.

However, although component replacement is practiced as a means to avoid system failure, this practice can have the opposite effect.
As stated before, reliability of the system is strongly connected to the independence of the components that form that specific system.

Replacing an old component with a new one requires a thorough and complete analysis of the new component, both isolated and in combination with the other components forming the system.

The reliability of a system depends on how its architecture and the component interfaces can coexist in equilibrium. The introduction of a new component, with its own specific interface, might facilitate some kinds of system architecture while precluding others (Nejmeh, 1989).
In order to foresee these events, the interface of a new component has to be discovered and analyzed prior to its introduction in the system (Brown, 1996).
However, current engineering practices and techniques do not allow for a complete analysis of a component’s interface meaning that, even though a certain component may seem like a perfect match when considered alone, it can lead to system failure once implemented.

The principle of component independence, which serves as a foundation for any component-based system, implies that one independent component can be replaced with another independent component that is implemented in a different way but still maintains the coherence of the system.
However, although the independence of components is a means to ensure system reliability, such a replacement can ultimately lead to system failure, mainly because current engineering practices do not allow for an accurate analysis and evaluation of how a component that has not yet been implemented will behave once integrated into the system.

  • Brown, A.W. (1996). Engineering of component-based systems. Engineering of Complex Computer Systems, Second IEEE International Conference, pp. 414-422.
  • Councill, W.T. and Heineman, G.T. (2001). "Component-Based Software Engineering as a Unique Engineering Discipline", Chapter 37 in G. T. Heineman and W. T. Councill, Editors, Component-Based Software Engineering: Putting the Pieces Together, Addison-Wesley, Boston, MA, pp. 675-964.
  • Nejmeh, B. (1989). Characteristics of Integrable Tools. Technical Report, Software Productivity Consortium.
  • Oja, E. (2001). Independent Component Analysis. Helsinki University of Technology.
  • Sommerville, I. (1989). Software Engineering, 3rd ed. Edinburgh: Pearson Education Limited, pp. 405-430.
  • Woit, D.M. (1998). Software component independence. High-Assurance Systems Engineering Symposium, Proceedings, Third IEEE International, 3(1), pp. 74-81.

Elad Shalom,
CTO at

Phishing for Romance

Phishing is a type of online fraud that tries to trick you into revealing personal financial information, passwords, credit card numbers, etc. In most cases, phishing takes the form of an e-mail message claiming to come from a bank, credit card company, online retailer or some other legitimate source. Take the SonicWALL Phishing and Spam IQ Quiz (available at

Phishing for Romance

Phishing is a form of social engineering in which an attacker, also known as a phisher, attempts to deceptively acquire genuine users' private or sensitive credentials by impersonating electronic communications from a trustworthy or public organization in an automated fashion (Jakobsson, 2007). Phishing techniques circumvent an organization's or individual's security measures. They nullify firewalls, authentication software, and encryption, because most phishers nowadays use social engineering to entice possible targets.
Attackers can use different phishing methods, varying from simple phone phishing to website forgery.

The most commonly used method is e-mail. Attackers can send large volumes of e-mail through botnets, or zombie networks, delivering fraudulent messages that contain links directing recipients to a phishing website.
Modified versions of this method have been seen throughout the years, and the profile of possible targets also changes. One version is the so-called Romance Scam. Victims receive an e-mail from an individual stating that he or she saw the victim's profile on a social network and proclaiming his or her "love".

Men and women in their mid-40s to 70s with a status of separated or widowed are the most likely targets of this scam. Once contact is made, the primary goal of the "lover" is to build rapport with the victim. He or she will claim to be an engineer for a company, with a young daughter, currently based in London or California.

The "lover" will send countless love poems or letters, likely copy-pasted, to the victim professing his or her eternal love. Once the victim is groomed, the "lover" suggests marriage and promises to send money to buy their dream house.

Victims experience an ecstatic feeling of joy that overrides their common sense.
Now that the victim is "hooked", the scam artist creates a story about how the victim can receive the promised money.
Victims are instructed that an e-mail from a bank, containing a transaction slip, needs to be processed and signed.

The transaction slip is almost a true copy of the real one, but with some modifications. Some versions contain a part where the victim must indicate the CVN/PIN number of his or her credit card or bank account. The victim's signature is also required, and once it is received by the scam artist, the account or credit card is used fraudulently to purchase items.

This is a good example of how emotions can contribute to the success of fraudulent activities. We are only human and make mistakes, but that is no reason not to be vigilant. To prevent these attacks, users should exercise better judgment and not fall for false pretenses.

The technically savvy should not dismiss the fact that technology is also a factor. A lack of information, or outdated information, greatly contributes to this issue. Developers must go beyond blaming users if they expect to deploy effective countermeasures against phishing attacks (Hong, 2012).

Tell-Tale Signs of a Romance Scam
  • Indication that your profile was seen on a social website
  • Attackers proclaim their “love” the minute you answer their e-mails
  • The use of an appealing introduction, such as an engineer for a petroleum company, a widowed architect, or a businessman traveling from country to country, followed by the heart-wrenching claim that his or her spouse died in an accident, leaving a young daughter.
  • Asking about personal information regarding bank accounts, credit cards and other monetary information
  • Asking for monetary assistance for certain circumstances like being held in the airport by customs officials, certain tax needed to be paid for a luxury item
  • Promising ridiculous amounts of money to the victim
  • When chatting with the scammer, his accent is clearly not of his said birthplace

  • Jakobsson, M. and Myers, S. (2007) "Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft": John Wiley & Sons Inc.
  • Hong, J. (2012) “The State of Phishing Attacks” Communications of the ACM, Vol. 55 No. 1, Pages 74-81

Elad Shalom,
CTO at

Reuse in Component Based Software Engineering

Discuss why savings in cost from reusing existing software is not simply proportional to the size of the components that are used. What other factors affect the cost?

Reuse in Component Based Software Engineering

Software reuse is the process through which an organization designates a set of operating procedures to specify, produce, classify, retrieve, and adapt software components with the purpose of using them in development activities (Parnas, 1994). One of the main reasons organizations have adopted component based software engineering (CBSE), a system which has highly reusable qualities, in their software development process is the reduction in development costs and increase in productivity.
Software reuse means reusing an asset, or a component, in a different system than the one in which it was initially used (Frakes and Fox, 1995). The term software reuse might be, at first glance, somewhat misleading, but it is by no means something that can be achieved free of cost (Lim, 1994). Software reuse is a long-term investment which, apart from reducing cost, can also increase the productivity, quality, and reliability (Haddad, Ross, and Kaensaksiri, 2010) of component-based software.

Software reuse requires an array of resources needed to set up a reuse library, reuse tools, and reusable products, which will represent the foundation for future reuse projects. In some cases, software reuse may be an investment that is not worth the benefits it offers to a certain organization; there are situations when building new software with no reused assets is significantly less costly than reusing assets. In order to decide if software reuse is a feasible approach and to determine the exact cost of such an operation, each organization should undergo an accurate cost analysis.

There are numerous factors which affect the cost of software reuse. Initially, an organization has to properly describe the software or product which is being developed in order to identify its requirements. This will allow the developer to search for already existing assets which could benefit the new software. In this stage of the process, costs will relate to trials, verifications, and acquisition of assets or components.

Subsequently, the developer will be required to invest in modifying the acquired assets in order for them to perfectly suit the new software system. Some components may require significant modifications, which could result in higher costs and more effort involved than creating a component from scratch would need (Boehm, Abts, and Chulani, 2000).
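The trade-off described above can be sketched as a toy cost comparison (all figures and function names are hypothetical; real estimation models such as COCOMO are far more elaborate):

```python
# Toy model: reuse pays off only while search + adaptation + integration
# costs stay below the cost of developing the component from scratch.
def reuse_cost(search: float, adaptation: float, integration: float) -> float:
    return search + adaptation + integration

def reuse_is_worthwhile(search, adaptation, integration, new_dev_cost):
    return reuse_cost(search, adaptation, integration) < new_dev_cost

# Heavy adaptation can make reuse costlier than building from scratch:
print(reuse_is_worthwhile(search=5, adaptation=60, integration=20, new_dev_cost=70))  # False
print(reuse_is_worthwhile(search=5, adaptation=15, integration=10, new_dev_cost=70))  # True
```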

New software cannot be built entirely on already-existing assets, so any software reuse process will require investment in the development of new components as well. Additionally, there will also be costs relating to the integration and testing of old and new assets in order to see how they work together. Moreover, before the new software can be ready for launch, additional money will have to be invested in infrastructure.

The exact cost of software reuse cannot be pinpointed precisely in most cases, but, usually, an accurate analysis will offer a rough estimate which will enable the developer to decide whether it is more advantageous to reuse components or to design entirely new software.

In conclusion, there are numerous factors that affect the cost of software reuse programs and each of them has to be taken into account when deciding whether reuse is the best approach an organization could use in the development of new software. Software reuse is much more complex than simply taking old assets and coming up with a new product and, consequently, its costs do not depend solely on the size of the reused components.

  • Boehm, B., Abts, C., and Chulani, S. (2000). Software development cost estimation approaches - A survey. Annals of Software Engineering,  10(1-4). Springer, Netherlands, November 2000
  • Frakes, W.B and Fox, C.J.  (1995). Sixteen Questions about Software Reuse. CACM. 38 (6), p75-87
  • Haddad, H, Ross, N., and Kaensaksiri W. (2010). Software Reuse Cost Factors. Department of Computer Science, Kennesaw State University, GA, USA
  • Lim, W. C. (1994). Effects of reuse on quality, productivity, and economics.  IEEE Software,  11(5), p23-30
  • Parnas, D. L. (1994). Software Reuse and Component Based Software Engineering. 16th International Conference Software Engineering.  

Elad Shalom,
CTO at

Cyberspace Censorship or Lawlessness

For this discussion, we will talk about “freedom of speech in cyberspace”. Please let us know any of the recent events (one event) from the news that illustrate a positive or negative implication of the impact of the Internet on the actual protection of the freedom of speech. What is your opinion on the event?

Cyberspace: Censorship or Lawlessness

Freedom of speech is the right to express opinion without censorship or restraint. Freedom of speech in cyberspace has been a highly debated topic since the advent of the Internet. Cyberspace is a lawless zone where the weak are prey to the strong.

Due to the surge of fraudulent and unscrupulous entities attacking websites and stealing sensitive data, governments are taking action by censoring or limiting the use of, or access to, certain sites.
Citizens of the Internet revolted against these actions. Social media users responded by using black backgrounds as profile pictures, creating protest pages, and posting viral commentary.

Radio stations were flooded with calls from angry Internet citizens who wanted their voices to be heard on air. The infamous Anonymous group took down government sites as a warning to all government entities. These are just examples of how people responded.

What kind of universality would it be if censorship were to rule the Internet, and what would universal access mean if it were access to only some information, only some ideas, only some images, only some knowledge? (Matsuura, 2005). Yes, it is a fact that the purpose of the Internet is to provide universal access to all. But what do we do when that purpose is blatantly abused to inflict harm socially, mentally, or sometimes physically?

Cyberspace has evolved into an entity that benefits man daily. Whether in the form of commerce, education, media, or entertainment, cyberspace will always be a part of our daily lives. One key element that the Internet provides is ease of communication.
You can chat, call, and even send SMS/MMS messages over the Internet. This also gave rise to social media, which became a connecting medium for socializing.

The Internet provides vast functionality; it could take days or weeks to enumerate every feature. In summary, cyberspace is an inexhaustible tool that is free for everyone to use. The Internet has frequently played an important part in such engagements by enabling communities to connect and exchange information instantaneously and by creating a sense of solidarity (La Rue, 2011).

There are two sides to everything, including cyberspace. As the years have passed, the evolution of the information highway has spawned features that are used predominantly by destructive entities. One example is pornography.
Nowadays, pornography has become a tradable market whose yearly revenue amounts to billions.
It has become a breeding ground for pedophiles and child molesters. Cyberspace has also become an outlet for hacking activities that prey upon the average person. These are just some of the ugly truths about which we seldom take action.

Knowledge is power, as the saying goes, but with unethical usage this virtual land becomes a lawless society in which anyone can be a victim. Yes, it is true that freedom of speech should not be limited or censored, but we should also keep in mind that society is governed by ethics.
We should not limit or censor the virtual landscape; what we should do is apply good judgment when using the Internet.
We have the capacity to do that. What we sometimes lack is the focus and the sense of responsibility to do the right thing.

  • Matsuura, K (2005). International Conference on Freedom of Expression in Cyberspace Paris, France. February 3-5, 2005 (UNESCO)
  • Freedom and security in cyberspace: Last accessed on February 27, 2012
  • La Rue, F (2011). Freedom of expression everywhere, including in cyberspace Nov. 4, 2011

Elad Shalom,
CTO at

Tuesday, February 26, 2013

Goals And Techniques Of Process Analysis

Discuss the goals and techniques of Process Analysis

Process analysis examines the series of events that result in an achievement; it tells you how that series of events occurred. Process analysis is of two types: informational and directional. Informational analysis asks the question, "How is this done?"
This analysis tells you how a certain thing was done or achieved. Directional analysis, on the other hand, asks the question, "How can you do this?" Here you examine how you can do a certain thing so that the process can be repeated. Directional analysis gives directions for a certain process.

The purpose of performing a process analysis is to understand how to do a certain thing or how it works, to ascertain how effective a process is, or to argue about its significance. The goals of performing a process analysis are to evaluate completeness, to identify the factors that make process maps difficult to use, to isolate bottlenecks, to measure process time, to find redundancies, and to examine resource allocations.

While analyzing a process, you ensure that it is performing properly and giving maximum productivity with minimum bottlenecks. Process Mapping is the first step in process analysis, which involves creating a visual presentation of the entire process.

Once it is mapped, the process is methodically analyzed to identify the bottlenecks, or the constraints that hinder the flow of the process (Belize 2011). According to the Theory of Constraints given by Goldratt, the main focus is to identify the bottleneck first, and then to ensure that the complete process functions at a speed equal to that of the bottleneck.

Mike Robson and Philip Ullah suggest one particular technique of process analysis, "value-added analysis", which is performed at each step of the process. In this analysis, each step is placed into one of three categories:
the step adds real value, the step adds business value, or the step adds no value.
Once all the steps within the process are categorized, the next step of value-added analysis is to speed up the steps that fall into the first category, those that add real value to the output of the process.
Then the business-value steps are minimized or eliminated and the no-value steps are entirely eliminated. This is done through automation and process re-design (Robson & Ullah 1996).
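The categorization step can be sketched as follows (the step names below are invented for illustration):

```python
# Value-added analysis sketch: classify each process step, then eliminate
# the no-value steps; real-value steps are kept (and sped up), while
# business-value steps are kept but flagged for minimization.
REAL, BUSINESS, NO_VALUE = "real value", "business value", "no value"

process = [
    ("validate customer order", REAL),
    ("file internal compliance report", BUSINESS),
    ("re-key data into a second system", NO_VALUE),
    ("ship product", REAL),
]

kept = [step for step, category in process if category != NO_VALUE]
to_minimize = [step for step, category in process if category == BUSINESS]

print(kept)         # the re-keying step is eliminated outright
print(to_minimize)  # candidates for automation or re-design
```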

Another common technique used for process analysis is cycle-time analysis. In this technique, distinct maximum and minimum processing times are allocated to each process step. Delay and lag times are also measured for each step.
This technique usually reveals that only 5 to 10% of the entire process time is actual work time. Such findings help you recognize areas that need improvement and suggest measures for making those improvements in the future. Cycle time can be reduced through electronic workflow and centralized data stores.
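A minimal cycle-time calculation, with hypothetical step timings in minutes, shows how small the work-time fraction typically is:

```python
# Cycle-time analysis sketch: compare actual work time with total cycle
# time (work + delay/lag) across the steps of a process.
steps = [
    {"name": "enter request", "work": 5,  "delay": 120},
    {"name": "approve",       "work": 10, "delay": 480},
    {"name": "fulfil order",  "work": 30, "delay": 240},
]

work_time = sum(s["work"] for s in steps)
cycle_time = sum(s["work"] + s["delay"] for s in steps)

print(f"actual work is {100 * work_time / cycle_time:.0f}% of cycle time")  # → 5%
```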

Other than these two techniques, there are other techniques that can be used for process analysis. The important ones are gap analysis, root-cause analysis, examining experience, and observation. Other common techniques are customer requirement analysis, Pareto analysis, Matrices analysis, supplier feedback, role playing, and so on (Long 2012).

The process analysis stage is frequently the stage that is given the least attention, for various reasons. Nevertheless, it is also the stage that is most likely to bring about a higher return on investment than all other phases.

  • Belize, D. (2011). Process Analysis Tools and Techniques. Available: Last accessed 6th February 2013.
  • Robson, M. and Ullah, P. (1996). A Practical Guide to Business Process Re-engineering. England: Gower Publishing Ltd.
  • Long, K.A. (2012). Outline of Common Procedure Analysis Techniques, Business Rules Journal, 13 (12). Available: Last Accessed 6th February 2013.

Elad Shalom,
CTO at

Monday, February 25, 2013

Process Metrics Effectiveness

Choose a process metrics to discuss. How effective would it be to help process improvement? How easy is it to use?

Process metrics are used to measure the components of the processes used to produce software (Singh et al. 2011). Process metrics are measurements that deal with a specific project or program and enable organizations to obtain, evaluate, and communicate the results of process improvement.

Time is the key element that affects process metrics, as it plays a big role in the quantitative analysis of projects. Comparing the planned duration with the actual elapsed time is therefore a significant component of project completion. Hence, process metrics are very useful for assessing the improvement of a process (Zelkowitz, 2009).

Process metrics are the means by which a software development project is sustained throughout its life cycle. These metrics are collected in order to inform strategic decisions about long-term process improvements. There are several process metrics, and some of them are related to and dependent on other processes. One such process metric is the fault or error reporting metric, which we will discuss here.

Fault or Error Reporting Metric

The end goal of any software development process is to come up with a software system that meets the requirements of the business, is delivered on time and within budget, and is easy to manage through enhancements and changes (Kan 2002).

 Source: GQM Paradigm -

The GQM, or Goal Question Metric, model is an easy way to ensure that the metrics collected are closely related to business goals (Cammarano 2008). The software development process is a key process, especially in QA testing, and the fault or error reporting metric can greatly impact it. This particular metric analyzes the average time the development team spends on error correction. Most software projects are carried out under strict time limits; hence, it is essential that modules are developed on time and are as free of errors as possible. However, due to interactions between individual modules and the larger application, developing a completely error-free module is not always possible. Another issue is the error reporting structure. A specific module may function as designed, but when given for testing it is combined with the larger application. Thus, during testing the entire application is tested, and individual errors in specific modules may be difficult to recognize.

Fault/Error Reporting Metrics and Process Improvement

It has been reported that using this error reporting metric can have a positive effect on the entire software development process. Some important aspects of the process that can be determined using this metric are average resolution time, average number of errors per module, critical error impact, and potential enhancement opportunities. This directly helps improve process development. It is challenging to make an application stable and fault tolerant, given dependencies such as supporting applications and hardware infrastructure. Hence, fault/error reporting metrics can definitely be effective in helping process improvement. They also help in understanding the financial and resource impact, which can result in the project being completed by the specified deadline.
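A sketch of how the metric might be computed from raw fault records (the module names and timestamps, given in hours, are hypothetical):

```python
from collections import defaultdict

# Fault-reporting metric sketch: average resolution time and fault count
# per module, computed from (hypothetical) fault records.
faults = [
    {"module": "billing", "reported": 0, "resolved": 8},
    {"module": "billing", "reported": 2, "resolved": 6},
    {"module": "auth",    "reported": 1, "resolved": 25},
]

times = defaultdict(list)
for f in faults:
    times[f["module"]].append(f["resolved"] - f["reported"])

for module, durations in times.items():
    avg = sum(durations) / len(durations)
    print(f"{module}: {len(durations)} faults, avg resolution {avg:.1f} h")
```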

How easy is it to use?

There are certain difficulties in using the error reporting metric, namely error identification and ease of use. It is very easy to record errors, but the subsequent phases of analyzing and identifying system errors pose a problem. There have been cases where users wrongly reported a warning as an error, which has led to a negative impact on investigation time.


Process measurement is always done either to help in process improvement or to evaluate whether the success criteria of a project are achieved. Once you are clear about the strategy you are going to use for improvement, you can choose and collect suitable metrics. It is only by collecting metrics that a project's or organization's success and performance can be validated. Metrics are the proof; without them, it is only guesswork. Process identification, analysis, measurement, and change are critical factors for any software development process.

  • Singh, G., Singh, D. and Singh, V. (2011). A Study of Software Metrics. IJCEM International Journal of Computational Engineering & Management, Vol. 11.
  • Zelkowitz, M.V. (2009). Process Improvement. Available: Last Accessed 11th February 2013.
  • Cammarano, R. (2008). Goal Question Metric (GQM) Model. Available: Last Accessed 11th February 2013.

Elad Shalom,
CTO at

Overlooked Risks In Software Development

What software risks are most commonly overlooked or not managed well within your organization? What mitigation strategies would you recommend to lessen the severity of the risks? Do you have any suggestions for how these risks may be prevented/ avoided in the future?

Risk management is vital for software development projects. Software risk management is needed during project execution for control purposes and project planning, and it helps reduce the chances of project failure. The first step in software project risk management is to identify a set of risks and record them in a checklist (Arnuphaptrairong 2011).

Installing mapping systems and other large software projects has proved beneficial to organizations. However, the risks involved in such implementations are huge. The threat of risk establishes the need for a systematic and aggressive risk management process to make sure the project is a success. The risk management process should address the three main risks linked to implementations of big systems: organizational, business, and technical risks. Most people know all about technical risk and are familiar with how to manage it (Campbell 2001).

Often the other two risks, organizational and business risk, are overlooked. However, for software projects to be successful these two risks must also be measured and controlled. Organizational risk measures the possibility that the client or user will not take advantage of the complete potential of the system.

Resistance to change and insufficient user preparation are the reasons this could happen. Organizational risk is specific to the organization that develops the project (Nielsen 2009).
Business risk, on the other hand, determines the chances of the newly implemented system failing to deliver financial benefits and productivity gains worth more than the cost incurred in achieving them.
This failure could be due to a number of factors. Generally the main cause is a lack of alignment between the functions built into the system and the priorities and business strategies of the company.

Change management initiatives, although powerful, can only go so far in mitigating organizational and business risks. We need stronger initiatives that involve not only preparatory training but also various elements specially designed to make sure that the new system is fully incorporated into the daily operations of all the relevant departments, within the specified duration of time.

Thus, operations integration must be performed to lessen the severity of these two risks. The process of operations integration comprises the information and approaches needed to make sure that the new system delivers the target result within the specified time and budget. In order for this mitigation strategy to succeed, all departments that will use the new system must follow its practices in a disciplined manner. Some of the actions included in this strategy are as follows:
  • Establish clearly which employees are going to use the new system.
  • Communicate clearly the new system's corporate goals.
  • Accurately align the work processes of the company with the business processes built into the new system.
  • Record the aligned processes in the organization's manual of Policies and Work Procedures.
  • Provide training sessions for the users so they can understand the work functions of the new system.
  • Develop incentives and disincentives for incorporating the new system into the organization.
It is up to the top management to lead strongly and emphasize the importance of starting operations integration at the very beginning of the project. Consequently, the management needs to make sure that the operations integration process is not led and carried out by the IT department, but rather by the senior manager of the user teams that will be working with the new system.

It is a fact that risk in development is inevitable; therefore, it has to be accepted and managed in any software project. However, we must remember that every project will need to manage three risk components: organizational, business, and technical. None of these should be overlooked if a new system is to be implemented successfully and efficiently.
  • Arnuphaptrairong, T. (2011). Top Ten Lists of Software Project Risks: Evidence from the Literature Survey. Proceedings of the International MultiConference of Engineers and Computer Scientists, Vol. 1. Available: Last Accessed 22nd February 2013.
  • Campbell, M. (2001). The Two Overlooked Aspects of IT Risk Management. Available: Last Accessed 22nd February 2013.
  • Nielsen, D. (2009). Identifying Risks to Software Projects. Available: Last Accessed 22nd February 2013.

Elad Shalom,
CTO at

Legacy System And Client-Server System

Some of you have lived through the conversion of a legacy system into a client-server or distributed system. Others have tangentially been involved, while others may have just heard about it. With the information you have read in this lecture and the text, and/or your own experiences, discuss some possible problems that might arise in the conversion.

Software and applications that run on old technology even though newer technologies are available are known as legacy systems. Because the costs, and the effort, of replacement are usually high, legacy systems are generally left as they are (Burke 2011). Thus, in most cases a legacy system stays in use because considerable time and effort would be needed to understand it before it could be replaced with better, newer technology.

The incentive behind developing a web browser system to replace a legacy system is to use a single client for every platform; a single codebase is kept and used across platforms (Dossick & Kaiser 1996). Another factor that pushes for conversion is that replacing such a system targets specific users and situations, and changing a legacy system into a thin client generally results in a significant decrease in required training costs. The changeover between applications will also be much simpler for the user.

Possible Problems that may arise in the conversion

When organizations decide to integrate legacy systems with newer technology, the biggest challenges arise from the new hardware and software. Even when a way is found to bring about such integration, it usually involves a great deal of money and effort just to create and sustain the integration process. For instance, many legacy systems operate on flat files, and incorporating a web portal interface with such systems places an additional burden on the dependability of the new system, as it needs extra effort (Rea 2011). The solution is for the organization to create a new system that leverages the legacy system's data on the network, by developing a network hosting system and a front-end function.

One of the major issues with legacy systems is that they cost a lot to maintain and operate. Costs are also incurred in keeping the IT team working with the legacy system for an extended period of time, which is a heavy responsibility for an organization. A common option is to replace the legacy system with an established package or an internally built system that can fulfill the requirements of the business and thus improve business processes.

Converting a legacy system to a client-server or distributed system is about retaining and extending the investment in the system by migrating it to newer platforms (Good 2002). The conversion process adds new capabilities to the system and lowers operational costs by implementing other technologies such as web services.
According to Burke (2011), supporting and maintaining a legacy system is a difficult task because changes in technology form an ever-larger barrier to the functioning of such systems. Ultimately, organizations look for current, updated systems and technologies to replace the older ones.

In the end, one major problem that might occur during conversion is that the new client-server systems (fat or thin clients) may be built on document management services and flat files rather than on relational databases.

  • Dossick, S. and Kaiser, G. (1996). WWW Access to Legacy Client/Server Applications. Available: Last Accessed 21st February 2013.
  • Burke, A. (2011). Definition of Legacy System. Available: Last Accessed 21st February 2013.
  • Rea, W. (2011). Problems with Legacy Systems. Available: Last Accessed 21st February 2013.
  • Good, D. (2002). Legacy Transformation. Available:

Elad Shalom,
CTO at

Where Not To Use SOA

Giving reasons for your answer, suggest two types of applications where you would not recommend the use of service-oriented architecture and why.

SOA, or Service Oriented Architecture, is an architectural concept in which system components expose data and functionality in the form of services. These services are accessible to other components through standards-based technologies (Thomson 2008).

With SOA one can create new applications by mix-and-match. The first step is to choose the application needed, the next is to identify the existing components that can help build up the application, and the last is to mix them all together (Gralla 2003).

Although SOA seems to be increasingly popular in the present day, it is not a new concept at all. It has existed since the 1980s. However, the idea didn't take root because there was no application programming interface or standard middleware to enable it. With the development of Web services, SOA has resurfaced: the underlying architecture of Web services fits the SOA approach perfectly. It has even been said that SOA is the key to the future of Web services (Gralla 2003).
The value of SOA in certain situations cannot be denied. However, when the IT environment is homogeneous or is not really expected to change, SOA is not recommended (Bloomberg 2004).

If your organization has only a single vendor delivering technologies, then SOA will not be very cost-effective. For example, say the main purpose of your IT infrastructure is to run a website: you may have a database, two Web servers, and an application server. An additional SOA layer will not add significant value to that IT investment. Generally, smaller companies run a small homogeneous network hidden behind their firewall. For such companies, and for those using single-vendor technology, an SOA is not a very practical addition to their basic infrastructure (Bloomberg 2004).

Service Oriented Architecture is also not recommended when no change is expected in your IT infrastructure. This situation says, "don't spoil something that's already working." You have an old legacy system sitting in the corner for years, gathering dust, and there is no reason to believe that things are going to change in the future. Then why mess with it? This applies to companies and organizations both big and small. I can say that I have computers that are more than six years old, in perfect condition and running Windows 98. Some of you may even have one with Windows 2000 approaching its fifth anniversary.

The point is that these systems, however old, may still be doing a good job at what they were built to do so many years ago, as long as you don't fiddle with them. This is true not only of the systems but also of the applications running on them.
Practically speaking, if there is hardly any reason to change the business logic, data flow, process, presentation, or other aspects of the application, then converting these relics to SOA may not really be worth the effort.
In the end, new approaches do add supplementary value, but they never really replace approaches that have existed for a lifetime.
Hence, it is very important to understand when to implement these new approaches and when not to.

  • Bloomberg, J. (2004). When Not To Use an SOA. Available: Last Accessed 21st February 2013.
  • Gralla, P. (2003). What Is Service-Oriented Architecture? Available: Last Accessed 21st February 2013.
  • Thomson, D. (2008). Application of Service-Oriented Architecture to Distributed Simulation. Available: Last Accessed 21st February 2013.

Elad Shalom,
CTO at

Risk Management Deliverables

List two or three risk management deliverables that you have seen or produced during a past or current project (such as: risk list, risk status report, risk management plan, expected value report, risk monitoring report, risk response form, etc.).

Describe their effectiveness.                                                        
When during the project life cycle were they developed?
How much training did those involved in the related processes receive?

Risks are part of every project. For a project to be successful, the key is not to avoid risks, but to know and understand them. A risk is the probability of occurrence of a condition or event that would negatively affect the project development process. Risk management involves identifying, understanding, and managing known risks so that the possibility of fulfilling the project objectives is increased.

Applying software risk management processes in practice presents real challenges, especially when it comes to incorporating the risk management process into the software development organization. Despite these difficulties, using risk management techniques and tools in project development processes is very beneficial (Kwak & Stoddard 2004).

Risk definition starts with risk identification. This step helps recognize probable losses and their causes. To implement an efficient risk management process, project members must have an overall perspective on the software development project. Risk assessment is then done to establish the chances of a potential loss occurring if the risk actually materializes (Jones 1994).
The next step is mitigation, which involves developing a risk avoidance plan; the last step is responsible for executing both the risk mitigation and risk avoidance plans. These steps pave the way for a thorough description of all the risks. The risks are all documented in a Risk List.
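Risk assessment of this kind is commonly quantified as "risk exposure": the probability of the risk occurring multiplied by the expected loss if it does. A minimal sketch of that calculation (the function name and the dollar figures are illustrative assumptions, not taken from the text):

```python
def risk_exposure(probability: float, potential_loss: float) -> float:
    """Expected loss contributed by a single risk: P(occurrence) * loss."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * potential_loss

# Hypothetical example: a 30% chance of a $50,000 schedule slip.
exposure = risk_exposure(0.30, 50_000)
print(exposure)  # -> 15000.0
```

Summing these exposures across the risk list gives a rough expected value for the project's total risk, which is one way an "expected value report" can be produced.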

The list must contain, for every risk, its definition, likelihood, consequence, indicators, risk ranking, contingency plan, and mitigation strategy (Boban et al. 2003). Creating a risk database does not necessarily involve technology; you can even use index cards, although functions like searching, sorting, and linking would then become a challenge and may lead to errors. Risk lists can be implemented effectively using Microsoft Excel or even Microsoft Word. We were able to implement ours effectively using Microsoft Project.
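As a sketch of how such a risk list might be structured programmatically (the field names, ranking formula, and example risks are illustrative assumptions, not prescribed by the text), each entry can carry the attributes listed above and the list can be sorted by ranking so the most severe risks are reviewed first:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    definition: str
    likelihood: float      # probability of occurrence, 0.0-1.0
    consequence: float     # estimated loss if the risk materializes
    mitigation: str = ""
    contingency: str = ""

    @property
    def ranking(self) -> float:
        # Simple ranking heuristic: likelihood x consequence.
        return self.likelihood * self.consequence

risks = [
    Risk("Key developer leaves", 0.2, 80_000, mitigation="cross-training"),
    Risk("Schedule slips by a month", 0.5, 40_000, mitigation="re-scope"),
]

# Review highest-ranked risks first.
for r in sorted(risks, key=lambda r: r.ranking, reverse=True):
    print(f"{r.definition}: {r.ranking:,.0f}")
```

A spreadsheet column doing the same multiplication and a sort on that column achieves the same effect in Excel or Project.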

Another risk management deliverable we used is Risk Status Reporting. This should function at two levels: external and internal. For IT operations it operates at the internal level, and here risk status reports must consider four possible risk management situations for every risk: resolution, contingency, valiance, and changeability. Risk reporting includes recording, collecting, and reporting the various risk assessments.
It is important to monitor the results and assess the adequacy of existing plans. Risk reporting helps provide a foundation for assessing project updates. As risk reports are formal records, they ensure that the risk assessments are comprehensive. Although it requires continuous planning and supervision, this approach can enable risks to be alleviated in the early phases of software development, when the costs of such software projects are still low.

We developed these risk management deliverables after we had formed the objectives and activities of the project. The project life cycle includes a step where potential challenges are identified and a contingency plan is developed; this is where we conduct risk assessment and reporting, using deliverables like risk lists and risk status reports.

For the implementation of risk management to be successful, the organization defines management roles for the project. Specific project members must be appointed whose foremost activities relate to risk management of the software development project. It is their responsibility to continually identify risks and risk-related activities.
All project stakeholders share responsibility for risk management. However, the Project Director is the one who decides whether to move forward with the mitigation strategies and implement contingency plans. This is especially true for cases that require additional costs.

The solution to efficient risk management lies in identifying and mitigating true risks and forming a contingency strategy in case a potential risk develops into a reality (Charette 1989).

  • Boban, M., Požgaj, Z. and Sertic, H. (2003). Strategies for Successful Software Development Risk Management. Management, 8 (2), pp. 77-91.
  • Charette, R. (1989). Software Engineering Risk Analysis and Management. New York: McGraw Hill.
  • Jones, C. (1994). Assessment and Control of Software Risk. New York: Prentice Hall.
  • Kwak, Y.H. and Stoddard, J. (2004). Project risk management: lessons learned from software development environment. Technovation, 24, pp. 915-920.

Elad Shalom,
CTO at

Sunday, February 3, 2013

Possible Risks and their Impacts on Software Projects

Discuss three possible risks that may arise on software projects. Determine what their impact would be and how they can be addressed.

Risks are uncertain events of the future with a probability for occurrence and a potential for loss. For software projects, risk identification and management are primary concerns. Proper analysis of these risks will help in effectively planning and assigning work for the project. 

One of the major reasons for project failures is the presence of multiple risks in the software project environment. Software projects are collections of larger programs with several dependencies and interactions. The projects involve creating something that has not been done before, even though the development process may be similar to other projects. Therefore, software development projects suffer from a variety of quality and usability problems, among others (Kwak & Stoddard 2004). It has been found that different kinds of risks affect user satisfaction, system performance, and budget (Jiang and Klein 1999).

Project managers require good tools to assess and manage these software project risks, in order to reduce the increasing rate of failure of software projects (Wallace, Keil, and Rai 2004). It has been suggested that software project managers must identify and control these risk factors to decrease the chances of failure (Karolak 1996). 

There are several types of risks that may arise in software projects. 
Here we look at three possible risks:

The Schedule Risk is a complicating factor because it is not easy to estimate schedules accurately and consistently (Abdel-Hamid, Sengupta, & Swett 1999). Generally, organizations begin a large project without properly comprehending its size and complexity. This is a huge risk and leads to problems in scheduling the project correctly. However, performance with scheduling risk improves with project experience (Ropponen and Lyytinen 2000). The most significant impacts of schedule risks, or slipped schedules, are usually changed scope, compromised quality, and increased project cost. Companies that focus on time-to-market are the most severely affected by this type of risk: missing a market window by even a week or a month can disrupt profitability and demolish market share. For a small company even a single missed opportunity could mean shutting down, and for larger companies it could have negative long-term consequences. Schedule risks can also impact the internal functions of companies by decreasing turnover. These problems can be avoided by keeping project schedules on track, which ultimately lowers project risk (Angotti & Greenstein 1999).

Requirement Inflation is another significant risk factor for software projects. As the project progresses, several features come up that were not recognized at the start of the assignment. This generally happens because it is difficult and time-consuming to gather and record all the necessary details from prospective users. The result is that the project team does not know what the requirements for successfully completing the project are (Martin et al. 1994). This raises the possibility of a system that cannot be used, because the system analysis needed to build an accurate and complete set of requirements has not been done (Addison & Vallabh 2002).

People Risk arises from inadequate managerial and technical skills (McLeod & Smith 1996). Project personnel may not possess the required knowledge of the business and technology, and may not have adequate experience to manage the project (Keil et al. 1998). Inadequate knowledge and skills have a huge impact on the outcome of the project.
To begin the risk management process, the risks first need to be identified so that appropriate measures can be taken to counter them (Schmidt et al. 2001). Software risk management is an overwhelming responsibility; however, it is effective and necessary for reducing the failure rate of projects. Organizations that have identified risks and implemented risk management processes for their software projects have been successful (Kwak & Stoddard 2004).

  • Abdel-Hamid, T.K., Sengupta, K. and Swett, C. (1999). The Impact of Goals on Software Project Management: An Experimental Study. MIS Quarterly, 23 (4), pp. 531-555.
  • Addison, T. and Vallabh, S. (2002). Controlling Software Project Risks: an Empirical Study of Methods used by Experienced Project Managers. Proceedings of SAICSIT, pp. 128-140.
  • Jiang, J.J. and Klein, G. (1999). Risks to different aspects of system success. Information and Management, 36 (5), pp. 263-272.
  • Karolak, D.W. (1996). Software Engineering Risk Management. Los Alamitos, CA: IEEE Computer Society Press.
  • Keil, M., Cule, P.E., Lyytinen, K. and Schmidt, R. (1998). A Framework for Identifying Software Project Risks. Communications of the ACM, 41 (11), pp. 76-83.
  • Kwak, Y.H. and Stoddard, J. (2004). Project risk management: lessons learned from software development environment. Technovation, 24, pp. 915-920.
  • Martin, E.W., DeHayes, D.W., Hoffer, J.A. and Perkins, W.C. (1994). Managing Information Technology: What Managers Need to Know. 2nd Edition, New Jersey: Prentice Hall.
  • McLeod, G. and Smith, D. (1996). Managing IT Projects. Massachusetts: Boyd and Fraser Publishing.
  • Ropponen, J. and Lyytinen, K. (2000). Components of Software Development Risk: How to Address Them? IEEE Transactions on Software Engineering, 26 (2), pp. 98-111.
  • Schmidt, R., Lyytinen, K., Keil, M. and Cule, P. (2001). Identifying Software Project Risks: An International Delphi Study. Journal of Management Information Systems, 17 (4), pp. 5-36.
  • Wallace, L., Keil, M. and Rai, A. (2004). How Software Project Risk Affects Project Performance: An Investigation of the Dimensions of Risk and an Exploratory Model. Decision Sciences, 35 (2).

Elad Shalom,
CTO at

Dependability in Open Source Development

Open Source development involves making the source code of a system publicly available. This means that many people can propose changes and improvements to the software. Analyze the dependability issues surrounding the process of Open Source development.

Dependability Issues Surrounding the Process of Open Source Development
‘Open Source’ is a term used to describe a particular style of software development project (Arief et al. 2002). Open source projects differ significantly from one another and possess very different characteristics (Lawrie et al. 2002). Examples of open source projects include operating systems, web and mail servers, and development tools. These examples point towards the formation of a community that can create software that is claimed to be very dependable (Lawrie and Gacek 2002).

Because Open Source development involves sharing the source code of a system, there have been issues regarding its dependability. Dependability is a relatively broad term which includes security, reliability, availability, and safety (Randell 2000). There have been several arguments about the dependability of Open Source software development. Many suggest that Open Source is more secure precisely because it provides its source code to everyone, including intruders, which runs counter to basic intuition (Bosio et al. 2002).

Systems whose services you can justifiably trust are known as ‘dependable systems’ (Reis et al. 2002). Neumann (2002) says that trust and trustworthiness are two different things: trust may be present without any evidence to justify confidence in a specific system, while trustworthiness implies assurance criteria that justify confidence in the system. A dependable computer system is one that possesses qualities like reliability, availability, and security (Lawrie & Jones 2002). Open Source software is vulnerable to attack through the distribution of altered versions of the software. This is a potential problem that raises the question of the trustworthiness of the software system.

A major dependability issue in Open Source development concerns the need for research-based evidence establishing which attributes of Open Source and non-Open Source software help assure the dependability of the software products produced. Due to the public and open nature of the Open Source development process, the barriers to influencing and becoming involved in the process are lowered (Lawrie and Gacek 2002).

Comparative research is needed to determine the benefits of introducing formal software engineering initiatives into Open Source projects, and thereby to determine whether programs like CHATS (Composable High Assurance Trusted Systems) have succeeded in increasing trust in open source software products.
It may instead be that introducing such software engineering methods, tools, and techniques only gives an impression of products being more dependable, rather than actually increasing dependability (Murphy and Maughan 2002).

Another consideration for dependability is the nature of the products that can be successfully developed in the process of open source development. Dependable system software like operating systems, developed by Open Source Software processes are seen as a prerequisite for further building and creating dependable and trustworthy systems (Neumann 2002). 
Therefore, the open source process may actually be the most effective development approach for completing a dependable system in IT infrastructures, or in cases where high levels of dependability are required for initial system deployments, e.g. safety-critical systems (Bosio et al. 2002).

Lawrie and Gacek (2002) establish that although Open Source software products are generally limited to system-oriented software, these systems are essential for further building dependable and trustworthy systems. Due to the growing scope and complexity of software, its trustworthiness has become a major issue. Central to developing trustworthy software is software fault tolerance. Software that is trustworthy is always stable.


  • Arief, B., Bosio, D., Gacek, C. and Rouncefield, M. (2002). Dependability Issues in Open Source Software - DIRC Project Activity 5 Final Report. Technical Report CS-TR-760.
  • Bosio, D., Littlewood, B., Strigini, L. and Newby, M.J. (2002): Advantages of Open Source Processes for Dependability: Clarifying the Issues. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 30-46.
  • Lawrie, T., Arief, B. and Gacek, C. (2002): Interdisciplinary Insights on Open Source. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 68-82.
  • Lawrie, T. and Gacek, C. (2002). Issues of Dependability in Open Source Software Development. Software Engineering Notes, 27 (3), pp. 34-36.
  • Lawrie, T. and Jones, C. (2002): Goal-Diversity in the Design of Dependable Computer-Based Systems. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 130-154.
  • Murphy, R. and Maughan, D. (2002): Trusted Open Source Operating Systems Research and Development. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 20-29.
  • Neumann, P. (2002): Developing Open Source Systems: Principles for Composable Architectures. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 2-19.
  • Randell, B. (2000): Turing Memorial Lecture: Facing up to faults. Computer Journal, 43 (2), pp. 95-106.
  • Reis, C., Pontin, R. and Fortes, M. (2002): An Overview of the Software Engineering Process and Tools in the Mozilla Project. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, pp. 155-175.

Elad Shalom,
CTO at