


Sunday, May 11, 2025

What Is the Real Goal of Software Development? A Simple Guide for Everyone

 Introduction: Why Do We Build Software?

You use software every day—even if you don’t think of it that way. Sending a message on WhatsApp? That’s software. Checking the weather on your phone? Software again. Logging in to your bank account online, buying groceries with an app, or even playing a game—software makes it happen.

But what’s the real goal of software development? Is it just to make something cool? Or is it about solving a problem?

At its core, the goal of software development is simple:
To create tools that help people do things better, faster, and more efficiently.

Let’s break that down in a way anyone can understand.


1. Solving Problems: The Heart of Software

Every piece of software starts with a problem. Maybe it’s hard to book a taxi quickly. Maybe you need a way to organize your budget. Maybe doctors need a system to check patient records.

Software is created to solve that specific problem.

Example:
Before ride-sharing apps like Uber or Ola, finding a taxi was a pain. You had to call someone, hope they’d come, or wait on the street. These apps solved that problem by letting you book a cab with a few taps. You know where the cab is, when it’s arriving, and what it will cost.

Software solves real problems like this.


2. Making Life Easier and Faster

Another goal of software development is making tasks easier. You might be able to do the task manually—but software helps you do it faster, more accurately, and often with less effort.

Example:
Think about spreadsheets. You could do all your budget calculations on paper with a calculator. But Excel or Google Sheets does it in seconds. You enter numbers, and it adds, multiplies, and even draws charts for you.

That’s software making life easier.


3. Automating Repetitive Work

Nobody likes to do the same thing over and over again. That’s where software really shines—it can automate boring or repetitive tasks.

Example:
A shop owner used to write down all customer invoices by hand. Now, billing software generates them automatically. Instead of spending hours, they can finish in minutes. The same applies to payroll software, email campaigns, and even social media scheduling tools.

The goal here? Save time and reduce errors.


4. Making Communication Better

Communication has changed a lot thanks to software. The goal is to help people stay connected—whether it’s across the street or across the globe.

Example:
Zoom or Microsoft Teams let you talk to your team, attend meetings, or even study online. Without such tools, the world would’ve stopped during the COVID-19 pandemic.

Software has made communication instant, global, and reliable.


5. Creating New Opportunities

Sometimes, software opens doors we didn’t even know existed. It creates new business models, jobs, and services.

Example:
Think of YouTube. Before it existed, there was no platform where ordinary people could upload videos and become global stars. Now, YouTubers earn a living by sharing their talents. Entire industries—like food delivery, online education, and streaming—exist because of software.

Software development helps people create, earn, and grow.


6. Supporting Businesses and Teams

From startups to large corporations, every business runs on software. The goal is to manage work better, serve customers faster, and make smarter decisions.

Example:
Customer support systems track all complaints and feedback in one place. Accounting software keeps the finances clear and tidy. CRM tools (Customer Relationship Management) help businesses follow up with clients.

The right software helps businesses run like clockwork.


7. Empowering Users with Control and Access

Software also gives people more control. You can track your own health, manage your finances, and even learn new things—anytime, from anywhere.

Example:
A fitness app tells you how many calories you burned today. An educational app lets you learn photography from a teacher in Europe while you’re sitting in India.

Software puts power in your hands.


8. Making the World More Inclusive

Another powerful goal is inclusion—making services available to everyone, including those who might be left out otherwise.

Example:
Speech-to-text software helps those with hearing impairments. Navigation apps help the visually impaired walk safely. Government apps let people in remote villages access information and apply for benefits without traveling far.

Software can be an equalizer, giving more people access to opportunities.


9. Evolving Continuously

Software is never “done.” Good developers always try to improve it—based on what users need, new technology, or better methods.

That’s why you see updates regularly. These updates fix bugs, add new features, or improve speed.

The goal is not just to build software—but to keep it useful over time.


10. Keeping Data Secure

We live in a digital world. Another big goal of software development is to keep user data safe—whether it’s passwords, money, or health records.

Good software is built with security in mind. Developers use encryption, firewalls, and testing tools to make sure users stay protected.

This is especially important in banking, healthcare, and communication apps.


Recap: 10 Goals of Software Development

Here’s a quick summary of everything we covered:

  1. Solve real-life problems

  2. Make tasks easier and faster

  3. Automate repetitive work

  4. Improve communication

  5. Create new opportunities

  6. Help businesses run better

  7. Empower users with access

  8. Include everyone, even in remote or underserved areas

  9. Continuously improve and adapt

  10. Keep user data safe and secure


So… Is Software Just for Techies?

Not at all. In fact, the best software is built for ordinary people—not just engineers or coders. Developers may write the code, but the software is for you.

That’s why many developers work closely with designers, writers, and users. They want to know what people need, what problems they face, and how to make things easier for everyone.

When non-tech people understand the goal of software, they can give better feedback, make smarter choices, and even come up with amazing ideas.


Final Thoughts

Software development isn’t just about typing lines of code. It’s about making lives better. Whether you're running a business, teaching students, managing your home, or simply watching videos online, software touches your life in small and big ways every day.

The next time you use an app or visit a website, think about this:
Someone, somewhere, built that tool to help you do something better.

That’s the true goal of software development.




Wednesday, August 7, 2019

Coordination with External Teams – Why Regular Meetings Matter

Sometimes, when I review the posts I write, I wonder—why even bother documenting something so obvious? Surely everyone already knows this, right? But then real-world experience kicks in. Time and again, I come across situations where professionals, even experienced ones, fall into issues that were already covered in one of these posts. That’s when I realize the importance of capturing even the seemingly obvious practices.

The goal of this post isn’t to restate the basics but to help individuals reflect on their processes. If you're doing something better than what’s mentioned here, I would genuinely appreciate it if you shared it in the comments. These insights help all of us grow.


📌 The Reality of External Coordination

For any team—especially those working on product development—it is inevitable that you will need to work with external parties. These could be:

  • Internal teams within your organization that depend on your deliverables or supply essential components.

  • External vendors or partners—third-party developers, marketing agencies, manufacturers, etc.

Let me give you an example. Our marketing team once struck a deal with a phone manufacturer to preload our app on their devices. At first glance, this seemed straightforward—just give them the APK and you’re done. But the reality? Far more complex.

We had to integrate special tracking parameters to monitor usage statistics:

  • How often the app was used when preloaded

  • How it compared to installs from other sources

This required not just technical changes, but intense coordination. And it’s one of the many examples where assuming things will “just work” can lead to missed deadlines or poorly tracked deliverables.
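To make the idea of "tracking parameters" concrete, here is a minimal sketch of the kind of tagging involved (my own illustration, not our actual implementation): every usage event carries an install-source field so preloaded installs can be compared with installs from other channels. The endpoint, field names, and source labels below are placeholders.

```python
# Hypothetical sketch: tag usage events with an install source so preloaded
# installs can be compared against other channels. Endpoint and field names
# are illustrative placeholders, not a real analytics API.
import json
import urllib.request

ANALYTICS_URL = "https://example.com/analytics/events"  # placeholder endpoint

def report_usage_event(event_name: str, install_source: str) -> None:
    """Send a usage event tagged with where the install came from."""
    payload = {
        "event": event_name,
        "install_source": install_source,  # e.g. "preload_oem_x" or "app_store"
    }
    request = urllib.request.Request(
        ANALYTICS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a 2xx status means the event was recorded

# Example call (commented out so the sketch does not hit the placeholder URL):
# report_usage_event("app_opened", "preload_oem_x")
```

The point is not the code itself but the coordination it forces: the manufacturer, marketing, and engineering all have to agree on what the source labels mean before the build is handed over.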


🛠️ Challenges in Cross-Organization Coordination

When you're dealing with external teams, one big mistake is assuming their work culture and structure mirror yours. This assumption can be costly.

You need to:

  • Clarify deliverables

  • Map roles and responsibilities

  • Track timelines accurately

  • Define escalation paths

Communication gaps, time zone issues, different management styles—these can all derail a project if not actively managed.


✅ Best Practices for Effective External Coordination

Here are some core practices to adopt when managing collaborations with teams outside your organization:

1. Define Clear Responsibilities

Start by identifying stakeholders on both sides:

  • Who owns which part of the work?

  • Who is the decision-maker?

  • Who handles testing, approvals, or rollbacks?

Have a contact matrix or ownership chart. Ensure it's documented and shared.

2. Establish Clear Communication Channels

Create dedicated channels for formal communication:

  • Email threads with clear subject lines

  • Slack or Teams channels for informal queries

  • Project management tools (like Jira or Trello) to track progress

Avoid mixing multiple discussions in a single thread—it leads to confusion.

3. Set Regular Meetings

Regular sync-ups are crucial. These meetings help:

  • Resolve roadblocks early

  • Ensure accountability

  • Track action items and outcomes

Depending on the project phase, these could be:

  • Weekly status meetings

  • Daily standups (during integration or release phase)

  • Ad hoc calls for urgent issues

4. Phase-Wise Role Adaptation

In the early stages, marketing, legal, and business development people might be heavily involved. As you transition into development, QA and release engineers take over. Ensure that:

  • The right people are in meetings

  • Transitions are smooth

5. Track Deliverables and Dependencies

Have a shared tracker (Excel, Notion, Jira, etc.) that both teams update. Include:

  • Milestones

  • Deadlines

  • Blockers

  • Review comments

Maintain visibility. Transparency prevents finger-pointing.

6. Issue Management and Escalations

Not all issues can be resolved at the same level. Define:

  • What constitutes a blocker

  • Who gets informed

  • Expected resolution times

Escalation should be a process, not a panic button.

7. Define Acceptance Criteria Early

To avoid disputes, both parties must agree on what “done” means. Define:

  • Functionality expectations

  • Performance benchmarks

  • Test coverage

  • User acceptance testing (UAT) criteria


💡 Tailor Your Process, But Keep the Structure

While the steps above are generic, the application of each depends on:

  • Team maturity

  • Nature of the partnership

  • Project complexity

A lightweight integration project with an external CMS vendor may not need a full-blown steering committee. But a core integration with a payments processor? That absolutely needs structured touchpoints.

Create templates for:

  • Kickoff checklists

  • Weekly status updates

  • Risk registers

  • Communication protocols

These documents become lifesavers during escalations.


🚫 What Happens When You Don’t Coordinate?

Let’s revisit the pre-installation app example. Suppose we had:

  • Skipped UAT

  • Failed to add tracking parameters

  • Assumed marketing had done the heavy lifting

The result? A product on millions of devices with:

  • No user insights

  • No uninstall metrics

  • No feature usage stats

In a data-driven world, this is a disaster. And entirely avoidable.


📝 Wrap-Up: Coordination Is Not Optional

Working with external teams—be they partners, clients, or vendors—is inevitable. How you manage that collaboration defines whether your project succeeds or drags into chaos.

So don’t assume. Don’t delay. Build coordination into the DNA of your process:

  • Communicate clearly

  • Document rigorously

  • Meet regularly

When done well, coordination becomes invisible—just like the best-run projects.







Sunday, June 30, 2013

Dot / Patch release: Estimating the features where a change is required

In a previous post on Dot / Patch releases (Estimation of dot / patch release), I covered some of the reasons for doing a dot or patch release and, at a broad level, how to estimate the overall effort needed and generate a schedule for the release. This needs to be done even if you already have a hard release date: if the generated schedule goes far beyond the required date, something is wrong and needs to be resolved (possibly by adding more resources, although that is not possible in every case).
This post talks about actually estimating the changes required by working through the different features to see the overall impact of the change. Consider a case where a dot release has to be made for the application, with a change made in one feature to fix a major defect. The dot release has to ship in a couple of months, with the exact release date decided based on the estimates for the different areas of the release (development effort, testing, localization, etc.). One of the starting points for this estimation effort is figuring out the effort needed for development, and for that, there needs to be a more detailed investigation of which areas require development work.
How do you do this? The first and most important starting point is to ensure that you have an experienced developer do the investigation and estimate. The actual work can be done by anybody in the development team, but the investigation should be done by a senior, experienced developer. How do you start:
- First, make sure that the developer has been given details of the change needed in terms of requirements.
- Give the developer some time to discuss with other team members the areas which are impacted. If this is a core area, the developer will need to go across the application to determine the impact. For example, a change may be needed in the user security framework, and that would span the entire application. If the change is localized to a specific area, the effort estimate is easier to make. There is no great complexity here, as long as the developer does a thorough job.
- The senior developer should spend time with the developer who is actually going to work on the release, make sure that person has enough understanding of the changes required, and, depending on the skill of the developer assigned, ensure that the correct estimate is given (which may include a required buffer).
- The developer also needs to spend time with the localization development team and the installer / release teams so that there is an idea of the amount of time needed from their side, and these can be included in the effort estimate.
- The developer needs to spend a fair amount of time with the testing team so that they have a good understanding of all the changes that are going to happen in the release, as well as the actual impact of the change, including all the areas of the application that are affected.
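As a rough illustration of the arithmetic behind rolling up such an estimate (this is my own sketch, not from a specific project), the per-area estimates can be summed with a buffer that depends on how familiar the assigned developer is with the area. The areas, numbers and buffer factors below are invented:

```python
# Illustrative sketch: roll up per-area effort estimates for a dot/patch
# release, with a buffer based on the assigned developer's familiarity.
from dataclasses import dataclass

@dataclass
class AreaEstimate:
    area: str
    base_days: float     # effort estimated by the senior developer
    familiarity: str     # "high", "medium" or "low" for the assigned developer

BUFFER = {"high": 0.10, "medium": 0.25, "low": 0.50}  # assumed buffer factors

def total_effort(estimates: list[AreaEstimate]) -> float:
    """Sum the buffered estimates across all areas of the release."""
    return sum(e.base_days * (1 + BUFFER[e.familiarity]) for e in estimates)

release_plan = [
    AreaEstimate("core fix", 5, "high"),
    AreaEstimate("localization strings", 2, "medium"),
    AreaEstimate("installer update", 1, "low"),
    AreaEstimate("regression testing", 4, "medium"),
]
print(f"Estimated effort: {total_effort(release_plan):.1f} days")
```

If the buffered total pushes the schedule past the required date, that is the signal to cut scope or add people, as discussed above.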


Monday, May 6, 2013

Open up the feature planning and tracking to the team for improvements ..

In the previous post on this topic (Risk planning by looking at confidence level of estimates), I talked about the difficulties posed by discrepancies between the original estimates done at planning time and the actual effort spent once the features are in development. Tracking these discrepancies and working out some kind of logic behind them is very important for the project manager. Typically, the estimation is done by the senior folks on the team, while the person who works on a specific feature can be anyone on the team (if the work is specialized, it would be allocated to somebody with more experience in that area, but that person could still be different from the person who did the estimation).
So what do you do? Well, you would typically track the estimate against the actual effort and try to figure out whether there is some kind of pattern. Sometimes it is possible to find one. We had a case where the person preparing the estimates had a personal situation during the time he was working on the estimations which was very distracting (but not so visible that the Project Manager could tell there was a problem). Soon after the actual work started, some analysis of the discrepancies showed that the actual effort was about 25% more than the estimates wherever this particular person was involved. This helped us re-estimate the remaining features and also decide that a particular feature needed to be stripped down, with its scope reduced, so that everything could fit in the time frame.
What was interesting was that this pattern was actually spotted by a member of the team, and though it seemed a bit strange at first, more analysis confirmed it. Speaking to the people who had already worked on the features that this person estimated also confirmed that they believed the features were under-estimated. What this incident showed us (the leads and the project manager) was that we should get the team more involved in the risk planning and analysis that we do, since sometimes they can identify a pattern that we might miss. There were other, smaller cases where we saw the same thing.
One concern raised before we started was that this would distract team members from their regular work, and that many of them might not be interested in this kind of analysis. We did find people who were not interested, but we started out with the entire team, involving them in the process we used for identifying risk factors, gathering the data and then doing the analysis. In terms of time, we figured this would take about an hour per week overall, but getting team members involved in work like this gives them a better perspective on some of the important processes that team leads go through; and when they could see a problem, they had a closer perspective on some of these risks and could figure out solutions for a percentage of these cases faster than we could. This saved time overall, and after a couple of rounds, the people who elected to remain involved were appreciative of what they learned from these exercises.
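As a small illustration of the kind of analysis described above (my own sketch, with invented numbers), the estimate-versus-actual data can be grouped by whoever produced the estimate, and estimators whose features consistently run over can be flagged for re-estimation:

```python
# Hedged sketch of the analysis described in the post: compare actual effort
# against the original estimate, grouped by whoever produced the estimate,
# and flag estimators whose work consistently runs over. Data is made up.
from collections import defaultdict

# (feature, estimator, estimated_days, actual_days)
records = [
    ("search", "dev_a", 4, 5.2),
    ("export", "dev_a", 3, 3.8),
    ("login",  "dev_b", 5, 5.1),
    ("sync",   "dev_b", 6, 6.2),
]

ratios = defaultdict(list)
for _feature, estimator, estimated, actual in records:
    ratios[estimator].append(actual / estimated)

for estimator, values in ratios.items():
    average = sum(values) / len(values)
    flag = "  <-- re-check remaining estimates" if average > 1.2 else ""
    print(f"{estimator}: actual/estimate = {average:.2f}{flag}")
```

Opening this simple spreadsheet-style check to the whole team is exactly how the 25% pattern above got noticed.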


Sunday, April 14, 2013

What are different components of operating system? – Part 5



10. Networking: 
- Most operating systems today support many different network protocols, along with the applications and hardware needed to use them.
- Computers running different operating systems can become part of the same network and participate in resource sharing, such as sharing printers, scanners and, of course, files.
- They may connect to the network through either a wired or a wireless connection.
- Through this component, the OS of one computer can access the resources of another computer at a remote location.
- Almost anything can be shared, including a remote computer's sound and graphics hardware.
- Network services such as SSH allow transparent access to a computer's resources.
- Client/server networking allows a system to connect to a server which offers various services to its clients.
- These services are provided through numbered access points, or ports, beyond the server's network address.
- At most one running program is normally associated with each port number, and that program handles all requests to the port.
- The program providing the service can then access the local hardware resources it needs by making requests to the kernel of its own operating system.
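As a minimal sketch of the client/server and port idea above (my own example, not from the post), the program below starts a tiny echo server on an arbitrary local port and has a client connect to it; the port number is purely illustrative:

```python
# Minimal sketch: a service is reached through the server's address plus a
# numbered port, and one program listens on that port to handle requests.
import socket
import threading

ready = threading.Event()

def serve_once(port: int) -> None:
    """Listen on a numbered port and echo back a single client's message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("127.0.0.1", port))
        server.listen(1)
        ready.set()                      # server is now accepting connections
        connection, _address = server.accept()
        with connection:
            connection.sendall(connection.recv(1024))

threading.Thread(target=serve_once, args=(5000,), daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 5000))
    client.sendall(b"hello over port 5000")
    print(client.recv(1024).decode())
```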

11. Security: 
- The security of a computer system depends on the proper working of the technologies involved in it. 
- A modern OS provides access to many resources, both for internal applications and for external devices, through the kernel.
- The operating system must be able to decide which requests are to be allowed and which are to be rejected.
- Some systems decide this on the basis of the requester's identity, which they classify as privileged or non-privileged.
- To establish identity there has to be some form of authentication.
- Most commonly, a username or ID is supplied along with a password.
- Other methods include biometric data or magnetic cards.
- In some cases no authentication is required for accessing a resource.
- Another concept under security is authorization.
- The services a requester may access are tied to his or her account, or to the group to which he or she belongs.
- Systems with a high security level also offer auditing options.
- With these, requests from various sources can be tracked.
- Internal security, protecting the system from programs that are currently executing, is also essential.
- It is maintained by having programs go through interrupts to the kernel of the OS.
- If programs are given direct access to hardware, security cannot be maintained.
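Here is a toy sketch of the authentication and authorization split described above (the users, groups and services are invented for illustration; a real system would use salted password hashes and proper audit storage):

```python
# Toy sketch: authentication establishes identity, authorization decides which
# services that identity may use. All names and data below are invented.
import hashlib

def digest(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

USERS = {"alice": {"password": digest("s3cret"), "groups": {"operators"}}}
SERVICE_ACCESS = {"shutdown": {"admins"}, "print": {"operators", "admins"}}

def authenticate(username: str, password: str) -> bool:
    """Establish identity: does the supplied password match the stored one?"""
    user = USERS.get(username)
    return user is not None and user["password"] == digest(password)

def authorize(username: str, service: str) -> bool:
    """Check whether the user's groups are allowed to use the service."""
    allowed = SERVICE_ACCESS.get(service, set())
    return bool(USERS[username]["groups"] & allowed)

if authenticate("alice", "s3cret"):
    for service in ("print", "shutdown"):
        verdict = "allowed" if authorize("alice", service) else "denied"
        print(f"alice -> {service}: {verdict}")
```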

12. User Interface: 
- For an operating system to be operated by a user, a user interface is required. 
- This interface is often termed the shell and is vital if the operating system is to work according to the user's instructions. 
- The user interface observes the directory structure and requests services from the operating system that acquire data from input hardware devices. 
- It also requests the operating system to display status messages and prompts on output hardware devices. 
- The two most common forms of user interface are the GUI (graphical user interface) and the CLI (command line interface). 
- In the former the environment is visual, while in the latter commands have to be typed line by line. 
- Almost all modern operating systems come with a graphical user interface.
- In some operating systems the GUI is integrated tightly with the core of the system, while others are modular in nature and run the GUI as a separate component.


Saturday, April 13, 2013

What are different components of operating system? – Part 4



8. Disk access and file systems: 
- Through this component of the operating system, the users as well as the programs they use are able to sort and organize files on a computer system. 
This is done through the use of folders or directories. 
- This is another central feature of the operating systems. 
- Computers store data on disks in the form of files, which are structured in certain predefined ways so as to enable faster access, better use of the available space and higher reliability. 
- These specific ways of storing data on disk together constitute the file system. 
- This makes it possible to assign names and attributes to the files. 
- This in turn helps in maintaining the hierarchy of directories and folders in the directory tree. 
- Early operating systems supported only a single type of file system and a single disk drive. 
- Those file systems were limited in the directory structures and file names they could use, and in their capacity and speed. 
- These limitations were often a reflection of limitations in the operating system itself, which made it difficult to support multiple file systems. 
- Some simple operating systems still offer only a limited range of options for accessing storage.
- On the other hand, Unix and Unix-like operating systems such as Linux support a VFS, or virtual file system. 
- Unix supports a wide range of storage devices regardless of their file system or design. 
- This enables them all to be accessed via a common API (application programming interface). 
- Programs therefore do not need specific knowledge of the devices they access.
- Through the virtual file system, the OS can give programs access to an essentially unlimited number of devices, each with any of many file systems on it.
- This it does through the use of file system drivers and other device drivers. 
- A device driver lets you access a connected storage device such as flash drives. 
- Every drive speaks its own specific command language, understood only by its device driver, which translates it into the standard language the OS uses for accessing drives.
- The kernel can access the contents of a drive only if the appropriate device driver is in place. 
- The purpose of a file system driver is to translate the commands used by a particular file system into the standard set recognized by the operating system. 
- Programs then deal with these file systems in terms of file names and directories or folders, organized in a hierarchy. 
- These files can be created, deleted or modified by the programs.
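A hedged sketch of the "common API over many file systems" idea described above (the two back ends here are invented stand-ins, not real file system drivers):

```python
# Sketch: programs go through one interface, and per-file-system drivers
# translate the calls. Both back ends are invented for illustration.
from abc import ABC, abstractmethod

class FileSystemDriver(ABC):
    @abstractmethod
    def read(self, path: str) -> bytes: ...

class InMemoryFS(FileSystemDriver):
    def __init__(self, files: dict[str, bytes]):
        self.files = files
    def read(self, path: str) -> bytes:
        return self.files[path]

class UpperCaseNameFS(FileSystemDriver):
    """Pretend 'legacy' file system that only understands upper-case names."""
    def __init__(self, files: dict[str, bytes]):
        self.files = files
    def read(self, path: str) -> bytes:
        return self.files[path.upper()]

class VirtualFileSystem:
    """Routes a path to whichever driver is mounted at its prefix."""
    def __init__(self):
        self.mounts: dict[str, FileSystemDriver] = {}
    def mount(self, prefix: str, driver: FileSystemDriver) -> None:
        self.mounts[prefix] = driver
    def read(self, path: str) -> bytes:
        for prefix, driver in self.mounts.items():
            if path.startswith(prefix):
                return driver.read(path[len(prefix):])
        raise FileNotFoundError(path)

vfs = VirtualFileSystem()
vfs.mount("/mem/", InMemoryFS({"notes.txt": b"hello"}))
vfs.mount("/old/", UpperCaseNameFS({"REPORT.DOC": b"legacy data"}))
print(vfs.read("/mem/notes.txt"), vfs.read("/old/report.doc"))
```

The program only ever calls `vfs.read()`; it neither knows nor cares which back end actually holds the file, which is exactly the point of the common API.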

9. Device Drivers: 
- This specifically developed computer software enables the interaction with the hardware devices. 
- It creates an interface through which communication can be done with the device via communications sub system or bus i.e., the means through which the hardware is connected to the system. 
- A driver depends upon the hardware but is also specific to the operating system.
- It enables application software running under the kernel to interact with the hardware device transparently.
- It also handles the interrupts required for the asynchronous, time-dependent interfacing needs of the hardware. 
- Abstraction is the key goal of device drivers.
- Every hardware model is different, so the operating system cannot know in advance how each device is controlled. 
- As a solution, operating systems dictate how devices should be controlled at an abstract level. 
- The purpose of the device driver is therefore to translate these operating system function calls into calls specific to the device. 
- A device functions properly, from the viewpoint of the OS, as long as a suitable device driver is available for it. 

Read the next post "What are different components of operating system? – Part 5"


Thursday, April 11, 2013

What are different components of operating system? – Part 3



6.   Virtual memory: 
- This is the component of the operating system that lets it present memory scattered across RAM and the hard disk to programs as if it were one continuous block. 
- This chunk of memory is called virtual memory.
- Using virtual memory addressing techniques such as segmentation and paging means that it is up to the kernel to choose which physical memory a program actually uses at a given time. 
- This allows the operating system to use the same physical memory locations for multiple tasks. 
- If a program tries to access memory that is not in its currently accessible range but has nonetheless been allocated to it, the kernel is interrupted in the same way it would be if the program exceeded its allocated memory. 
- Such an interrupt is called a page fault.
- When the kernel detects a page fault, it adjusts the application's virtual memory range so that the access can be granted. 
- In this way, the kernel has discretionary power over where a particular application's memory is stored. 
- In today's operating systems, memory that is accessed less frequently is temporarily stored on the hard disk or other media. 
- This is done to make space for other programs.
- The process is termed swapping, because an area of physical memory is used by a number of programs and its contents can be exchanged or swapped on demand. 
- Virtual memory gives the perception that there is more RAM in the system than is physically installed.  
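A toy sketch of the page-fault and swapping flow described above (frame counts, page contents and the first-in-first-out eviction choice are all invented for illustration):

```python
# Toy demand-paging sketch: an access to an unmapped virtual page "faults",
# the handler maps it into a free frame, and if no frame is free the oldest
# mapped page is swapped out first.
from collections import OrderedDict

NUM_FRAMES = 2
page_table: "OrderedDict[int, int]" = OrderedDict()   # virtual page -> frame
swap_space = {0: "page0 data", 1: "page1 data", 2: "page2 data"}

def access(page: int) -> str:
    if page not in page_table:                        # page fault
        if len(page_table) >= NUM_FRAMES:
            evicted, frame = page_table.popitem(last=False)
            print(f"swap out page {evicted} from frame {frame}")
        else:
            frame = len(page_table)
        page_table[page] = frame
        print(f"page fault: map page {page} -> frame {frame}")
    return swap_space[page]

for p in (0, 1, 0, 2):   # third access hits; the last one forces an eviction
    access(p)
```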

7. Multi-tasking: 
- The execution of multiple independent programs on the same system is termed multi-tasking. 
- It gives the appearance that the tasks are being performed at the same time. 
- Since a CPU can actually run only one or a few instruction streams at any instant, this is achieved through the principle of time sharing. 
- Each program uses a share of the computer's time for its execution. 
- The operating system contains a piece of software known as the scheduler, whose purpose is to determine how much time each program gets, in what order, and how control is passed between programs.
- The kernel passes control to a process, allowing the program to access memory and the CPU. 
- Later, through some mechanism, control is returned to the kernel so that the CPU can be used by another program. 
- This passing of control between the application and the kernel is referred to as a context switch. 
- Modern operating systems have extended the concept of application preemption so that the OS maintains preemptive control over internal run times as well. 
- The philosophy that governs preemptive multi-tasking is to make sure that all programs are given regular CPU time.
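A minimal round-robin sketch of the time-slicing idea described above (the programs and their amounts of work are invented):

```python
# Minimal round-robin scheduler sketch: each "program" gets a fixed slice of
# work per turn, and control switches back to the scheduler after every slice.
from collections import deque

def round_robin(programs: dict[str, int], time_slice: int) -> None:
    """programs maps a name to its remaining units of work."""
    queue = deque(programs.items())
    while queue:
        name, remaining = queue.popleft()
        executed = min(time_slice, remaining)
        remaining -= executed
        print(f"run {name} for {executed} units ({remaining} left)")
        if remaining > 0:
            queue.append((name, remaining))   # context switch: back of the queue

round_robin({"editor": 3, "compiler": 7, "player": 2}, time_slice=2)
```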

Read the next post "What are different components of operating system? – Part 4"


Wednesday, April 10, 2013

What are different components of operating system? – Part 2


All the components of an OS must work together to make the different parts of a system cooperate. All the hardware needs of a software application are satisfied through the operating system, whether they are as simple as mouse movement or as complex as networking over Ethernet.


4. Modes: 
- Today's CPUs support a number of modes of operation. 
- CPUs that support multiple modes use at least the following two basic modes:
  • Supervisor mode: The kernel of the operating system uses this mode for low-level tasks that need unrestricted access to the computer hardware, such as communicating with devices like the graphics card and controlling read, write and erase operations on memory.
  • Protected mode: This mode is the opposite of the previous one and is used for everything else. Application software running in protected mode can access the computer hardware only by going through the kernel, which performs the restricted tasks in supervisor mode.
- There are other modes similar to these two, such as virtual modes used by CPUs to emulate older processors, for example a 32-bit processor on a 64-bit one, or a 16-bit processor on a 32-bit one.
- Supervisor mode is the mode in which a computer runs automatically after start-up. 
- The first programs to run include the EFI or BIOS, the boot loader and the OS. 
- These programs require unlimited access to the computer hardware, because a protected environment can only be set up from outside one. 
- The CPU is switched to protected mode only when the operating system passes control to some other program. 
- A program running in protected mode may be granted access to only a very limited set of CPU instructions. 
- A program can leave protected mode only by raising an interrupt, which passes control back to the operating system. 
- This is how an operating system maintains exclusive control over access to memory and hardware. 
- One or more CPU registers containing information that the currently executing program is not allowed to access are collectively termed 'protected mode resources'. 
- If an attempt is made to alter such a resource, the system switches to supervisor mode.
- The OS then deals with the illegal operation; it may kill the program.

5. Memory management: 
- The kernel of a multi-programming OS is responsible for managing the system memory currently in use by programs. 
- The kernel ensures that programs under execution do not interfere with memory being used by each other.
- Since the time-sharing principle is followed, each program is given independent access to system memory. 
- Early operating systems used a cooperative memory management model. 
- It was assumed that all programs would use the kernel's memory manager voluntarily and not exceed the memory limits assigned to them. 
- However, this form of memory management is rarely seen any more, because bugs in programs can cause them to exceed those limits.
- If a program fails, memory used by other programs may end up corrupted or overwritten. 
- Memory belonging to other programs might also be altered deliberately by viruses or other malicious code, which in turn can affect the working of the OS. 
- Under cooperative management, a single misbehaving program can crash the whole system. 
- The kernel therefore limits each program's access to memory through methods of memory protection such as paging and segmentation. 
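A tiny sketch of the protection idea described above (bases, limits and program names are invented): every program gets its own region, every access is checked against it, and an out-of-range access is treated as a fault instead of silently touching another program's memory.

```python
# Sketch: base-and-limit style memory protection. An access outside a
# program's own region raises a fault instead of corrupting its neighbour.
class ProtectionFault(Exception):
    pass

class Region:
    def __init__(self, base: int, limit: int):
        self.base, self.limit = base, limit
    def translate(self, offset: int) -> int:
        if not 0 <= offset < self.limit:
            raise ProtectionFault(f"offset {offset} outside limit {self.limit}")
        return self.base + offset

regions = {"prog_a": Region(base=0, limit=100), "prog_b": Region(base=100, limit=100)}
print(regions["prog_a"].translate(42))      # fine: physical address 42
try:
    regions["prog_a"].translate(150)        # would land in prog_b's memory
except ProtectionFault as fault:
    print("kernel terminates prog_a:", fault)
```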

Read the next post (What are different components of operating system? – Part 3)


How not to put all your eggs in one basket - ensure some amount of knowledge sharing

Every software team has some great engineers, and some not so great ones. Typically, when you are doing a project, you would take your list of features and categorise them into a breakdown of how difficult or how easy they are. Once you have done this, you would get into the process of estimating these tasks and assigning them to different members of the team. One somewhat hidden portion of this entire exercise of estimation and assignment is that you would do the estimation based on a specific person to do the feature (for a more complicated feature, the person who is more skilled would take far less time than the mediocre engineer in your team - and every team has bright people and mediocre people). What this results in is that you have already done some amount of thinking about whom to assign to the more difficult features - typically you would look at the total time available for your skilled people and assign accordingly.
So, you have started the entire exercise of getting the work done, with different parts of the project assigned to different people and are tracking progress of these tasks. This is where I tell my example where some amount of bad planning and some assumptions landed the team in some serious trouble - we had some critical features that were very important for the ongoing version of the product, with marketing depending on these features to be the ones that would get highlighted in reviews, and these were the ones that were pulled up in stakeholder status and review sessions. In fact, part of discussion with stakeholders was about assignment of these features, and once we had presented the assignment of these features to the more skilled engineers, there was approval to proceed.
Now, how many of you have heard of Murphy's Law? If something can go wrong, it will. And guess what, it did. We were deep into task assignment, and since the engineer doing the most complicated feature was skilled but a bit moody, we had neglected to do periodic sharing of the code and design with a buddy engineer. Then the engineer's daughter ran into sudden health issues, which caused him to take leave for 2 weeks under immense emotional stress. It landed us in a tricky situation. Given that he was already stressed and out to take care of his daughter, any kind of information sharing was very difficult. We had permission to drop a couple of features so that we could divert some engineers to this sudden issue, but the lack of shared knowledge was proving to be a big handicap.
We asked the replacement engineers to study the code intensively, but it did cause us a 6 day delay in delivery of the important features, and also caused a higher number of bugs to turn up since these replacement engineers did not have the same level of familiarity with the code. You would not want to know about the reaction of stakeholders to this delay, especially since these features were critical. And, the biggest problem was that we had a process whereby we used to get engineers to share their current status with their buddy, but for tight-scheduling reasons, we had gone a bit slow on the buddy program; and that was what hit us the most.


Tuesday, April 9, 2013

What are different components of operating system? – Part 1


All the components of an OS must work together to make the different parts of a system cooperate. All the hardware needs of a software application are satisfied through the operating system, whether they are as simple as mouse movement or as complex as networking over Ethernet.

1. Kernel: 
- This component of the OS is the medium through which application software can connect to the computer’s hardware system. 
- This component is aided by many device drivers and firmware. 
- With the help of these, it provides a very basic control level for all the hardware devices of a system.
- For programs, access to memory in RAM and ROM is managed by the kernel alone. 
- It is the kernel's authority to decide which program gets what access, and at what level. 
- The operating state of the CPU is set up by the kernel itself at all times. 
- It also organizes the data to be stored in long-term non-volatile storage such as flash memory, tapes and disks.

2. Program execution or process: 
- The OS is actually an interface between the hardware and the application software.
- An interaction between the two can be established only if the application abides by the rules and procedures programmed into the operating system. 
- Another purpose of the operating system is to provide a set of services that simplify both the execution and the development of programs. 
- Whenever a program is to be executed, the kernel of the operating system creates a process and assigns the required resources, such as memory, to it. 
- In a multi-tasking environment, a priority is also assigned to this process. 
- The binary code of the program is loaded into memory and execution is initiated.
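An application-level sketch of that flow (using Python's standard subprocess module; the child program here is just an illustrative one-liner): the system is asked to create a new process, load a program into it, run it and report its exit status, while the kernel does the real work of allocating memory and scheduling.

```python
# Application-level view of process creation: request a new process, run a
# program in it, and collect its exit status.
import subprocess
import sys

# Launch a separate Python interpreter as the "program" whose execution we request.
completed = subprocess.run(
    [sys.executable, "-c", "print('running in a freshly created process')"],
    capture_output=True,
    text=True,
)
print(completed.stdout.strip())
print("exit status:", completed.returncode)
```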

3. Interrupts: 
- This component is the central requirement of the operating systems. 
- This is so because the interrupts provide a way of interaction between the OS and its environment which is not only reliable but also effective. 
- Older operating systems, which worked with very small stacks, watched the various input sources and checked whether any of them required action (an approach called polling).
- This strategy is not useful in today's operating systems, which use very large stacks. 
- Here, interrupt-based programming is the better approach. 
- Modern CPUs have built-in direct support for interrupts. 
- With interrupts, a computer knows when to automatically save the contexts of the local registers or run some specific code in response to events as they occur. 
- Even very basic computers support hardware interrupts. 
- Interrupts let the programmer specify what code is to be run when a certain event occurs. 
- When the hardware receives an interrupt, it automatically suspends the program it is currently executing. 
- The status of the program is saved and the code associated with the interrupt is executed. 
- In modern operating systems, the kernel is responsible for handling interrupts. 
- Either a running program or the computer's hardware can raise an interrupt. 
- When an interrupt is triggered by a hardware device, it is left to the OS's kernel to decide what to do, by executing some code. 
- Which code is run is decided based upon the interrupt's priority. 
- The task of processing hardware interrupts is usually delegated to the device driver. 
- A device driver might be part of the kernel, a separate program, or both.
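As a loose, application-level analogy for this flow (my own sketch, Unix-specific since it uses POSIX signals): the code to run for a particular event is registered up front, the program keeps doing its normal work, and the handler is invoked when the "interrupt" arrives.

```python
# Analogy for interrupt-driven programming using POSIX signals (Unix only):
# register a handler for an event, keep working, and let the handler run
# when the event fires.
import signal
import time

def on_alarm(signum, frame):
    print("interrupt received: running the registered handler, then resuming")

signal.signal(signal.SIGALRM, on_alarm)   # tell the system what code to run
signal.alarm(1)                           # schedule the event one second from now

for step in range(3):                     # the "current program" keeps executing
    time.sleep(0.6)
    print(f"normal work, step {step}")
```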


Tuesday, March 12, 2013

What are autonomic systems? What is the basic concept behind autonomic system?


In this article we shall discuss autonomic systems, but before moving on to that, we shall briefly discuss autonomic computing. 

About Autonomic Computing

- Distributed computing resources can have the ability of self-management. 
- This kind of computing is called autonomic computing, and such systems are called autonomic systems. 
- Because of their unique capabilities, these systems are able to adapt to changes both predictable and unpredictable. 
- At the same time, these systems keep the intrinsic complexity hidden from users as well as operators. 
- The concept of autonomic computing was initiated by IBM in 2001.
- It was started in order to curb the growing complexity of managing computer systems and to remove any complexity barriers that hinder development.

About Autonomic Systems

- Autonomic systems have the power to make decisions on their own. 
- They do this based on high-level policies. 
- These systems automatically check and optimize their status and adapt to changed conditions. 
- The framework of these computing systems is made up of various autonomic components that continuously interact with each other. 
- The following are used to model an autonomic component:
  1. Two main control loops, namely the global and the local.
  2. Sensors (required for self-monitoring)
  3. Effectors (required for self-adjustment)
  4. Knowledge
  5. Adapter or planner
- The number of computing devices is increasing by a great margin every year.
- Not only this, each device's complexity is also increasing. 
- At present, highly skilled humans are responsible for managing this huge volume of complexity. 
- The problem is that the number of such skilled personnel is limited, and this has led to a rise in labor costs.
- It is true that the speed and automation of the computing systems have revolutionized the way world runs but now there is a need for a system that is capable of maintaining these systems without any human intervention. 
- Complexity is a major problem of the today’s distributed computing systems particularly concerning their management. 
- Large scale computer networks are employed by the organizations and institutions for their computation and communication purposes. 
- These systems run diverse distributed applications that are capable of dealing with a number of tasks. 
- These networks are being pervaded by the growing mobile computing. 
- This means that employees have to stay in contact with their organizations outside the office through devices such as PDAs, mobile phones and laptops that connect through wireless technologies. 
- All these things add to the complexity of the overall network, which cannot be managed by human operators alone. 
- There are 3 main disadvantages of manual operation:
  1. Consumes more time
  2. Expensive
  3. Prone to errors
- Autonomic systems are a solution to such problems since they are self-adjusting and do not require human intervention. 
- The inspiration behind autonomic systems is the autonomic nervous system found in humans.
- This self-managing system controls all the bodily functions unconsciously.
- In autonomic systems, the human operator just has to specify the high-level goals, rules and policies that guide the management. 

- There are 4 functional areas of an autonomic system:
  1. Self–configuration: Responsible for the automatic configuration of the network components.
  2. Self–healing: Responsible for the automatic detection and correction of the errors.
  3. Self–optimization: Monitors and controls the resources automatically.
  4. Self–protection: Identifies the attacks and provides protection against them.
- Some characteristics of autonomic systems:
  1. Automatic
  2. Adaptive
  3. Aware
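To tie the pieces together, here is a hedged sketch of an autonomic control loop built from the components listed earlier in this post (sensors, effectors, knowledge and a planner); the "load" readings and scaling policy are simulated, invented values:

```python
# Sketch of an autonomic monitor/analyze/plan/execute loop: a sensor monitors,
# the planner decides against a high-level policy, and an effector adjusts
# the system. All numbers are simulated for illustration.
import random

POLICY = {"target_load": 0.7}        # the high-level goal set by the human operator

def sensor() -> float:
    """Monitor: read the current load (simulated here)."""
    return random.uniform(0.2, 1.0)

def planner(load: float, workers: int) -> int:
    """Analyze and plan: decide how many workers we should be running."""
    if load > POLICY["target_load"]:
        return workers + 1           # self-optimization: scale up
    if load < POLICY["target_load"] / 2 and workers > 1:
        return workers - 1           # scale back down when idle
    return workers

def effector(current: int, desired: int) -> int:
    """Execute: apply the planned adjustment."""
    if desired != current:
        print(f"adjusting workers: {current} -> {desired}")
    return desired

workers = 2
for _ in range(5):                   # the continuous control loop
    load = sensor()
    workers = effector(workers, planner(load, workers))
```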


Friday, March 8, 2013

What are benefits of agile process improvement?


The agile methodologies we have today are the result of experience gained from real-life projects undertaken by leading software professionals. These professionals knew first-hand the challenges and limitations that traditional development methodologies imposed on various projects. 

- Agile process improvement directly addresses the issues of traditional development methods, both in the processes and in the philosophy behind them. 
- It provides a simple framework that suits development teams in varying scenarios while focusing on fast delivery of business value. 
- With these benefits, organizations have been able to reduce the overall risk associated with developing a project. 
- Delivery of initial business value is accelerated by agile process improvement. 
- This is achieved through a process of constant planning and feedback. 
- Agile process improvement ensures that business value is maximized throughout the development process. 
- With its iterative planning and feedback loop, teams can keep the software process aligned with business needs as required. 
- Another major benefit of the agile process is that software development can adapt to the ever-changing requirements of the process and the business. 
- By measuring and evaluating status based on the amount of work completed and tested, a more accurate picture of progress becomes visible. 
- The final result of agile process improvement is a software system that addresses customer requirements and the business in a much better way. 
- By following an agile process improvement program, deployable, tested and working software can be delivered incrementally, and increased visibility, adaptability and value are delivered earlier in the software development life cycle. 
- This proves to be a great thing in reducing the risk associated with the project. 
- There are a number of problems with traditional development methods. 
- Research has found that waterfall-style development was a major contributing factor in software project failures. 
- Other software simply could not meet real needs. 
- These projects were unable to deal with changing requirements and suffered from late integration. 
- All this has shown that traditional development methods are a risky as well as costly way of building software. 
- Thus the majority of the industry has turned towards agile development.
- There is a continuous feedback input from the customers and a face to face communication among all the stake holders. 
- The business needs associated with the agile process improvement are ever changing. 
- Organizations want quick results from what they invest. 
- They want their improvement programs to keep pace with these changing business needs. 
- Agile process improvement comprises several mechanisms through which all this can be achieved. 
- Working iteratively lets you deliver the product to the customer on or before the deadline. 
- It lets you deliver only the things that are actually required, i.e., it does not let you waste time on things that are not needed. 
- Also, early and regular feedback from the customer lets you deliver the product with the quality the customer desires.
- Agile projects are distributive in nature, i.e., the work is divided among people. 
- Agile software development is still an immature process and there is a need for improving it for the betterment of the software industry. 
- Agile process improvement is one way to do this.

