

Sunday, March 31, 2013

Software development - Keeping track of stuff such as images across versions - Part 2

This is a series of posts on handling items other than code during the development of different versions of a software application. In the first post of this series (Keeping track of images and other content during software development - Part 1), I took the case of the icons, images and other graphics used in a software product, which may either be carried over unchanged across versions or updated when a new version of the product is released. For example, the application icon is typically updated every time a new version is released, so that it can be differentiated from previous versions and can graphically convey the theme of that specific release. In this post I will add more detail, focusing on the graphics and images.
The theme of this series is that we put a lot of effort into tracking the different versions of a product's code (the code being seen as the most valuable part of a release in terms of Intellectual Property - and the IP of products such as Photoshop, MS Office and others is highly valued by the organizations that build them). We keep the code in a repository and, with processes such as labeling and branching, we can easily track which version of the code went into which release.
However, my experience across multiple versions of a major software application leads me to believe that the same sort of focus is not put on the non-code items that also form part of the application. I have already talked about the graphics and images used in an application, and how my experience suggests that organizations can easily disregard the need for a good process around these items. Let me continue with the same example I was using earlier.
We had a team that would provide us the graphics, and we would go through several rounds of discussions, sharing these graphics with the core team and with the team that created the installers that were an integral part of the application. However, the process was haphazard: they would send us the original set and later revisions by email (as .zip attachments), and the process of reviewing them (even though we tried various options such as using Excel, a Wiki, or even filing changes as defects) did not give us a high level of confidence that the process would be error-free. And so it turned out; invariably we would find that some graphic was missing, and so on. Further, there was an additional element of complexity in these discussions, since different graphics were required in different sizes.
The application dialogs, the installer dialogs and other such dialogs had different size requirements for these graphics, so we needed to communicate the different size requirements and then receive the different sizes. Since the team was producing the graphics at a large size, it eventually worked out better for them to send us multiple sizes of each graphic so that we could pick the one we liked. We did try using a more process-oriented tool for this purpose, but getting a creative team to follow tight processes was very problematic and caused additional stress for the program manager once the process was put in place and the creative team did not follow it. As a further example, once they provided us these sizes, there was no effective place to save them other than a folder on a server, and if you needed a different size in the next version of the product, it could be really hard to track these down.
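As a purely illustrative sketch (not the process we actually used), even a simple naming convention with one folder per release, and the size encoded in the file name, makes it much easier to find a specific variant again in the next version. The layout and names below are assumptions of mine, not something from our actual project.

import os

def find_graphic(graphics_root, release, name, width, height):
    """Return the path of '<name>_<width>x<height>.png' for a release, if present."""
    path = os.path.join(graphics_root, release, f"{name}_{width}x{height}.png")
    return path if os.path.exists(path) else None

# Example layout this assumes:
#   graphics/7.0/app_icon_32x32.png
#   graphics/7.0/app_icon_256x256.png
print(find_graphic("graphics", "7.0", "app_icon", 32, 32))

If the requested size does not exist, the function returns None, and the team knows that variant has to be requested from the creative team rather than improvised.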

More in the next post of this series (Keeping track of stuff such as graphics and others across software versions - Part 3 - TBD).


Friday, March 29, 2013

Software development - Keeping track of stuff such as images across versions - Part 1

When you are working on a software product, it is not just the code that you need to worry about. A typical large application contains a large amount of material besides the code that your software engineers write. When you look at the menus of the software, there might be cute icons to depict certain functions; even the typical Undo option that sits in the Edit menu of a large number of applications has a button you can hit to perform the Undo. If you look at recent versions of Microsoft Office, there is a ribbon-style menu at the top with a lot of options, all of them complemented by the visual appearance of icons. Typically, software makers try to ensure that their software has easy-to-remember buttons and icons that users recognize, which makes the software more usable for consumers.
However, for code you would use some sort of code repository, and the same is not really true for non-code items such as buttons and other graphics. Well, you can actually store such icons and visual items in the source safe, but you will not be able to do operations such as comparisons between different versions of these items; in fact, only when you export the history of such graphics from the source safe will you be able to visualize what a previous version of a graphic looked like.
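One workaround, sketched below purely as an illustration (the paths and layout are my assumptions), is to record a hash of every graphic per release; the repository still cannot show a visual diff, but at least you can tell which graphics were added, removed or changed between two releases.

import hashlib, os

def snapshot(graphics_dir):
    """Record a content hash for every graphic shipped in one release."""
    manifest = {}
    for name in sorted(os.listdir(graphics_dir)):
        path = os.path.join(graphics_dir, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                manifest[name] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def diff(old, new):
    added = [n for n in new if n not in old]
    removed = [n for n in old if n not in new]
    changed = [n for n in new if n in old and new[n] != old[n]]
    return added, removed, changed

# Example: compare the graphics shipped in version 6 and version 7.
print(diff(snapshot("graphics/6.0"), snapshot("graphics/7.0")))

Storing such a manifest alongside each release would also have made the management request described below far less painful to answer.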
One of the big challenges software makers face when using such graphics in their applications is maintaining them. Consider the application icon (this is typically the graphic seen on the desktop, and in numerous other places where the name of the software is referenced). As you move from one version to another, you will need an application icon that is different from the one used in the previous version. This helps you mark that specific version of the software with a specific icon. So what do you do with the icon you were using for the previous version? You still need to save it somewhere, since it was used in the previous version, and from time to time people want to reference these icons and other imagery used within the product. Once, when we were on version 7 of the product, the management team wanted us to present the main graphics (including the application icon) used in the previous 5 versions of the software; the idea was that a review team would look at the correlation between the icon and the main focus of each previous release, and whether these graphics were effective. It was hard to find the correct imagery, and the specialized team that had been preparing the graphics for us had suffered a lot of attrition over the years; they were unable to provide the data we needed. Finally, we had to install many previous versions, compare these with the versions stored in the source safe, and only then come up with the list.
The point of the previous story is that it is much more difficult to maintain items that are different from code; with code, the source safe can do a quick text comparison and show how the code has changed over a period of time. Mature, large applications can have a huge number of graphics embedded all over the software, and it is hard to find somebody on the team who knows which graphic was added at what point in time and, more importantly, the reason why a specific graphic was added. And there are more complications. Consider that you are using a graphic in your software that was contributed by a team member and included over 4 years ago. The graphic is still being used, and suddenly the company is hit with a legal challenge over the rights to the graphic. At that point, you had better have a good system of record-keeping to ensure that you can resolve such a problem.

The next post will be a continuation of this series (Keeping track of stuff such as graphics in the software - Part 2).


Thursday, March 28, 2013

What is the basic principle behind Dynamic synchronous transfer mode (DTM)?


- Dynamic synchronous transfer mode or DTM is one of the most interesting of all the networking technologies. 
- The basic objective behind this technology is to achieve high-speed networking along with high-quality transmission.
- It also has the ability to adapt bandwidth quickly to varying traffic conditions. 
- DTM was designed to be used in integrated service networks, covering both one-to-one communication and distribution.
- Furthermore, it can be used for application-to-application communication. 
- Nowadays, it is also used as a carrier for higher-layer protocols such as IP. 
- DTM is a combination of 2 basic technologies namely packet switching and circuit switching. 
- It is because of this that the DTM has many advantages to offer. 
- It also comes with a number of services access solutions for the following fields:
Ø  City networks
Ø  Enterprises
Ø  Residential as well as other small offices
Ø  Content providers
Ø  Video production networks
Ø  Mobile network operators

Principles of Dynamic synchronous transfer mode (DTM)

 
- This mode has been designed to work on a unidirectional medium. 
- This medium also supports multiple access i.e., all the connected nodes can share it. 
- It can be built up on various topologies such as:
  1. Ring
  2. Double ring
  3. Point – to – point
  4. Dual bus and so on.
- DTM is based on TDM, or time division multiplexing. 
- Here, a fiber link's transmission capacity is broken down into smaller units of time. 
- The total link capacity is broken down into fixed-size frames of 125 microseconds. 
- The frames are further divided into 64-bit time slots. 
- The number of time slots in one frame is determined by the link's bit rate (a worked example follows this list). 
- The time slots are divided into separate control slots and data slots. 
- If more control slots are required, data slots can be turned into control slots, or vice versa.
- The nodes attached to the link have the right to write to both kinds of slots. 
- As a consequence, a given slot occupies the same position within each frame. 
- Each node owns at least one slot, which it can use for transmitting control messages to the other nodes. 
- These messages can be sent on a user's request, in response to messages from other nodes, or for network management purposes.
- The control slots constitute a small fraction of the total capacity, while the major part is taken by the data slots that carry the payload. 
- The signaling overhead in DTM varies with the number of control slots, though it is usually very low.
- Whenever a communication channel is established, the node allocates a portion of the available data slots to the channel. 
- Demand for network transfer capacity keeps increasing because of the globalization of network traffic and integrated audio, video and data transmission. 
- The transmission capacity of optical fibers is increasing far faster than processing power. 
- DTM holds the promise of providing full control over network resources.
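As a quick back-of-the-envelope illustration of the numbers above (the 1 Gbit/s link rate is an assumption for the example, not something specific to DTM):

# Worked example: slots per 125-microsecond frame on an assumed 1 Gbit/s link.
FRAME_TIME = 125e-6      # frame length in seconds
SLOT_BITS = 64           # bits per time slot

link_rate = 1e9                                     # assumed 1 Gbit/s link
bits_per_frame = link_rate * FRAME_TIME             # 125,000 bits per frame
slots_per_frame = int(bits_per_frame // SLOT_BITS)  # about 1953 slots
slot_capacity = SLOT_BITS / FRAME_TIME              # 512 kbit/s per recurring slot
print(slots_per_frame, slot_capacity)

Because a slot recurs once per frame, each allocated slot corresponds to a 512 kbit/s channel on this link; higher-capacity channels are built by allocating more slots.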


Wednesday, March 27, 2013

Product / Program Manager - Provides an example of lessons learnt

A project / program manager (from this point on, I will use the term Project Manager only, for ease of writing, the roles being similar in some respects) has many responsibilities as part of a software project, and a full description of those responsibilities is well beyond the scope of a single article. This article focuses on the project manager's role as a repository of experience about the risks arising in a project, and the advantages that come from being able to quickly recognize a risk and figure out how best to alleviate it.
If you are a project manager, you will recognize this situation; it happens often enough. A problem starts to appear in a project, you are able to draw upon your experience in this project or in similar projects, and as a result you are able to take the actions required to ensure that the risk is contained (or quickly highlighted, if that is what is needed). Not every risk history or course of action can be carried in a risk index, and the action that a project manager takes is based on his or her own set of thoughts and principles, so there is no one-size-fits-all set of next steps.
Typically, once you see a situation developing in a project, you will have some idea whether it can quickly reach critical size, or whether it is a small enough contained risk that does not need too much attention at that point. For example, I remember a recent case where a component vendor reported some issues in the test reports from their end of the component, and the senior team members already remembered a similar case from 2 years back. At that time, the initial reading of the situation suggested there was no problem, but it quickly caught fire, and we spent the better part of a week dousing the flames, repairing our burnt behinds, and learning from the experience.
Once we saw the trend of the new situation, we quickly arranged a call with the vendor, figured out the problem area, and learnt that the vendor was under-stating the problem. If we had included this new version of the component with our product during the development phase, around 5% of our testing base (including external vendors) would have lost their data. The last time this happened, senior management had been called in by some of the more irate external testers, so the inclination this time was to prevent any recurrence, and we got the situation under control within a short period of time.
Now, this was a problem. Of the senior members of the team, we had lost 2 out of 5 to attrition, the new members did not know the history of this situation, and it was not an easy matter to capture in the database we used to record risks. We were lucky that we had stayed involved with the same project, and hence knew the history of this problem and were able to quickly devise a solution that would prevent the worst impact. This is why I wrote about how having a history of the project can enable quick and correct reactions to problems (there are, of course, some problems when you have the same person on the same project for a long time). With attrition, valuable experience can be lost, leading to situations where the team has to spend more time reacting to problems that would have been clearer with historical background.


Tuesday, March 26, 2013

What is meant by Instant Messaging?


- Almost everyone today is familiar with the term 'instant messaging', or IM for short. 
- This is a type of communication that takes place over the internet and quickly transmits text-based messages between people (i.e., senders and receivers). 
- The basic purpose of IM is to provide “real time direct written language–based online chat”. 
- It does this in push mode, through shared clients, between two or more people using personal communication devices (mobiles) or personal computers. 
- A network such as the internet is used to convey the text message to the person it is intended for. 
- IM addresses two kinds of communications namely:
  1. Point-to-point communications, i.e., from one person to another.
  2. Multicast communications i.e., from one sender to a number of receivers.
- Nowadays, much enhanced modes of communication have been introduced by advanced instant messaging services. 
- Some enhancements are inclusion of video chat, audio calling and hyperlinks to other media etc. 
- Online chat is the umbrella term under which the concept of IM falls. 
- The similarity between them is that both are text-based, happen in real time and offer a bi-directional flow of messages. 
- The main distinction is that IM is client-based. 
- These clients facilitate connections between known users through a contact list (also known as a friend list or buddy list).
- Chat, on the other hand, works on web-based applications that facilitate communication between multiple users.
- A number of communication technologies that facilitate text-based communication are combined to provide the instant messaging service. 
- The biggest feature of IM chats is that they take place in real time, just like a phone call (a minimal point-to-point sketch appears at the end of this post). 
- IM is different from other web services such as e-mail in the sense that users perceive the “quasi-synchronicity of the communications”. 
- In IM, you can message only those people who are online at that time. 
- However, some systems allow you to message offline people, which draws some similarity between e-mail and IM.
- IM is a cheap and effective means of communication. 
- It allows immediate receipt of the message and also enables us to reply immediately. 
- However, delivery is not necessarily guaranteed, since transmission control may not be supported. 
- In some cases there are additional features that make IM more interesting as mentioned below:
  1. Enabling users to see each other via web cam.
  2. Talking for free over the internet using headphones and microphones.
  3. File transfers
  4. Saving a text conversation for future reference.
- Instant messaging came long before the internet, first appearing on multi-user systems such as Multics (Multiplexed Information and Computing Service) and CTSS (the Compatible Time-Sharing System). 
- Some early IM services, such as talk, ntalk and ytalk, used a peer-to-peer protocol.
- Some other examples of early IM services are:
  1. Zephyr notification service
  2. Bulletin board systems or BBS
  3. Freelancin' Roundtable
  4. CompuServe CB Simulator: this was the first dedicated online chat service
- Real-time text was also a feature of these early instant messaging services.   
- AOL's real-time IM implements the modern real-time text feature as an optional feature. 
- Web conferencing services can integrate both IM abilities and video calling.
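As a minimal illustration of the point-to-point, client-based, real-time nature of IM described above, here is a rough sketch of two endpoints exchanging text over a socket. It is not how any real IM service is implemented; the host, port and structure are assumptions for the example, and real services add presence, contact lists, servers and much more.

import socket, sys, threading

HOST, PORT = "127.0.0.1", 5050   # illustrative endpoint

def receive(conn):
    while True:
        data = conn.recv(1024)
        if not data:
            break
        print("peer:", data.decode())

def run(as_server):
    if as_server:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
    else:
        conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        conn.connect((HOST, PORT))
    threading.Thread(target=receive, args=(conn,), daemon=True).start()
    for line in sys.stdin:           # each typed line is delivered at once
        conn.sendall(line.encode())

# Run one instance as the server and another as the client to chat.
run(len(sys.argv) > 1 and sys.argv[1] == "server")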


Monday, March 25, 2013

What is Dynamic synchronous transfer mode (DTM)?


Dynamic synchronous transfer mode, or DTM, is a technology developed for optical networking. The ETSI (the European Telecommunications Standards Institute) standardized this technology in 2001, beginning with the specification ETSI ES 201 803-1. 
It is a circuit-switched network technology that also serves as a time division multiplexing technology; it is built on a combination of switching and transport.
This technology guarantees QoS (quality of service) for services that involve streaming of video. 

However, it can also be used for packet-based services. It is marketed for the following:
  1. Professional media networks
  2. Mobile TV networks
  3. DTT or digital terrestrial television networks
  4. Content delivery networks
  5. Consumer oriented networks (for example, triple play)

What is Switching?

- DTM specifies switching of channels. 
- This is what makes it different from other transmission techniques such as SONET (synchronous optical networking), SDH (synchronous digital hierarchy) and so on. 
- A DTM channel is provisioned end to end over a network with a general topology through the use of control signaling.
- DTM therefore represents a circuit-switched system. 
- The switches are time-space switches that guarantee the QoS property. 
- Resources are physically allocated for each channel in the switch. 
- This is quite contrary to switches that are based on packets or cells. 
- In those kinds of switches there is always a competition for resources between the packets and cells. 
- Such competition leads to delaying and discarding of packets and cells. 
- Other methods offer a shared resource allocation mechanism that limits how far packet and cell switches can utilize the network while still maintaining QoS at a certain level. 
- DTM does not follow this shared allocation mechanism; it implies that a network can, in theory, be loaded to its full limit and still guarantee QoS. 
- Real utilization thus becomes a question of adapting the network topology and its link capacities to the actual traffic matrix.

- Packet and cell based switching technologies are better suited to statistical multiplexing.
- This means that whenever packet streams in a router arrive at an outgoing link that is common to all of them, they are buffered until resources are free on that particular link.
- In this way, it becomes possible to use the outgoing link to the maximum degree possible without causing many delays. 
- This approach fits best-effort traffic well. 
- But streaming media has certain QoS requirements that cannot be ignored. 
- Streaming traffic is by nature not statistical and is therefore better served by end-to-end resource allocation.
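The contrast can be illustrated with a small sketch (the numbers are made up for the example): a reservation-based link, in the spirit of DTM, admits a new stream only if its full rate can be reserved, instead of buffering everything and hoping for the best.

class ReservedLink:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.allocated = 0

    def admit(self, stream_kbps):
        """End-to-end allocation: accept a stream only if its rate can be reserved."""
        if self.allocated + stream_kbps <= self.capacity:
            self.allocated += stream_kbps
            return True
        return False   # refused up front rather than delayed or dropped later

link = ReservedLink(capacity_kbps=10_000)   # an assumed 10 Mbit/s link
print(link.admit(4_000), link.admit(4_000), link.admit(4_000))  # True True False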

- This category applies to audio and video services.
- It also covers IP traffic carried over a guaranteed-QoS transport, if the majority of the content is audio and video. 
- Some other technologies, such as IP and Ethernet, have also been adapted for the same purpose. 
- Multi-protocol label switching (MPLS) can be applied to the carriage network to improve the reliability and determinism required by most streaming media. 
- This technology is applied along with techniques such as forward error correction.
- Ethernet has been made suitable for audio and video transmission through improvements such as provider backbone bridge traffic engineering. 
- Dynamic synchronous transfer mode was developed at the Royal Institute of Technology (KTH). 


Sunday, March 24, 2013

What are types of artificial neural networks?


In this article we discuss the types of artificial neural networks. These models simulate the biological nervous system.
1. Feed forward neural network: 
- This is the simplest type of neural network that has been ever devised. 
- In these networks the information flow is unidirectional; therefore the data moves only in forward direction. 
- From input nodes data flows to the output nodes via hidden nodes (if there are any). 
- In this model there are no loops or cycles. 
- Different types of units can be used for constructing feed forward networks, for example McCulloch-Pitts neurons.
- Continuous neurons with sigmoidal activation are used with error back-propagation (a minimal sketch appears after this list).
2. Radial basis function network: 
- Radial basis functions are powerful tools for interpolating in a multi-dimensional space. 
- These functions are built around a criterion of distance with respect to some center.
- These functions can be applied in the neural networks. 
- In these networks, sigmoidal hidden layer transfer characteristic can be replaced by these functions.
3. Kohonen self–organization network: 
- Unsupervised learning is performed with the help of the self-organizing map, or SOM. 
- This map was an invention of Teuvo Kohonen.
- A set of neurons learn to map points in the input space to coordinates in the output space. 
- The dimensions and topology of the input space can be different from those of the output space, and the SOM attempts to preserve these.
4. Learning vector quantization or LVQ: 
- This can also be considered a neural network architecture. 
- It, too, was suggested by Teuvo Kohonen.  
- Here, prototypical representatives of the classes are parameterized, along with two important things: a distance-based classification scheme and a distance measure.
5. Recurrent neural network: 
- These networks are somewhat contrary to the feed forward networks. 
- They offer a bi–directional flow of data.
- In a feed forward network, data is propagated linearly from input to output. 
- A recurrent network also transfers data from later stages of processing back to earlier stages. 
- Sometimes these also serve as general sequence processors. 
- Recurrent neural networks have a number of types as mentioned below:
Ø  Fully recurrent network
Ø  Hopfield network
Ø  Boltzmann machine
Ø  Simple recurrent networks
Ø  Echo state network
Ø  Long short term memory network
Ø  Bi – directional RNN
Ø  Hierarchical RNN
Ø  Stochastic neural networks
6. Modular neural networks: 
- Studies have shown that the human brain actually works as a collection of several small networks rather than as one huge network; this insight led to modular neural networks, in which smaller networks cooperate in solving a problem. 
- Modular networks are also of many types such as:
Ø  Committee of machines: Different networks that work together on a given problem are collectively termed a committee of machines. The result achieved through this kind of networking is considerably better than what is achieved by the individual networks, and it is highly stable.
Ø  Associative neural network or ASNN: This is an extension of the committee of machines that goes a little beyond the weighted average of the various models. It is a combined form of the k-nearest neighbor technique (kNN) and feed forward neural networks, and its memory is coincident with the training set.
7. Physical neural network: 
- It consists of electrically adjustable resistance material capable of simulating artificial synapses.
- There are other types of ANNs that do not fall into any of the above categories:
Ø  Holographic associative memory
Ø  Instantaneously trained networks
Ø  Spiking neural networks
Ø  Dynamic neural networks
Ø  Cascading neural networks
Ø  Neuro – fuzzy networks
Ø  Compositional pattern producing networks
Ø  One – shot associative memory
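As referenced in point 1 above, here is a minimal sketch of a feed-forward pass with sigmoid activation; the layer sizes and random weights are assumptions for illustration only, and no training is shown.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input layer (3 units) -> hidden layer (4 units)
W2 = rng.normal(size=(4, 2))   # hidden layer (4 units) -> output layer (2 units)

x = np.array([0.5, -1.2, 0.3])     # one input vector
hidden = sigmoid(x @ W1)           # data flows strictly forward...
output = sigmoid(hidden @ W2)      # ...with no loops or cycles
print(output)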


Friday, March 22, 2013

What is an Artificial Neural Network (ANN)?


- An artificial neural network, or ANN (sometimes called simply a neural network), is a mathematical model inspired by biological neural networks. 
- The network consists of several interconnected artificial neurons. 
- The model uses a connectionist approach to computing and processes information accordingly. 
- In a number of cases, the neural network can act as an adaptive system that changes its structure during a learning phase. 
- These networks are particularly useful for finding patterns in data and for modeling complex relationships between inputs and outputs. 
- An analogy to an artificial neural network is the neuron network of the human brain. 
- In an ANN, the artificial nodes are termed neurons, or sometimes neurodes, units or 'processing elements'. 
- They are interconnected in such a way that they resemble a biological neural network. 
- So far, no formal definition has been given for artificial neural networks. 
- These processing elements, or neurons, show a complex global behavior. 
- This behavior is determined by the connections between the neurons and their parameters.
- Certain algorithms are designed to alter the strength of these connections in order to produce the desired flow of signals. 
- The ANN operates using these algorithms. 
- As in biological neural networks, functions in an ANN are performed in parallel and collectively by the processing units.
- There is no delineation of tasks assigned to the different units. 
- These neural networks are employed in various fields such as:
  1. Statistics
  2. Cognitive psychology
  3. Artificial intelligence
- There are other neural network models that emulate the biological central nervous system (CNS) and are part of the following:
  1. Computational neuroscience
  2. Theoretical neuroscience
- Modern software implementations of ANNs prefer a more practical approach over the biologically inspired one. 
- This practical approach is based on signal processing and statistics; the biologically inspired approach has been largely abandoned. 
- Parts of these neural networks often serve as components of larger systems that combine adaptive and non-adaptive elements.
- While the practical approach is better suited to solving real-world problems, the biologically inspired approach has more to do with the connectionist models of traditional artificial intelligence. 
- What the two approaches share is the principle of distributed, non-linear, local and parallel processing and adaptation. 
- A paradigm shift was marked by the use of neural networks during the late eighties. 
- This shift was from the high level artificial intelligence (expert systems) to low level machine learning (dynamical system). 
- These models are very simple and define functions such as:
f: X → Y
- Three types of parameters are used for defining an artificial neural network:
a)   The interconnection pattern between neuron layers
b)   The learning process
c)   The activation function
- The second parameter updates the weights of the connections and the third one converts the weighted input into output. 
- Learning is the aspect that has attracted the most interest. 
- There are 3 major learning paradigms that are offered by ANN:
  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning
- Training a network means selecting, from a set of allowed models, the one that best minimizes the cost.
- A number of algorithms are available for training; most of them employ gradient descent (a small sketch follows this list).
- Other methods available are simulated annealing, evolutionary methods and so on.
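As a small sketch of what gradient descent training looks like in practice, here is a single linear neuron fitted to a toy dataset by repeatedly stepping against the gradient of a squared-error cost; the data, learning rate and iteration count are assumptions for illustration, and a real network applies the same idea layer by layer via back-propagation.

import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])   # inputs
y = np.array([1.0, 3.0, 5.0, 7.0])   # targets (y = 2x + 1)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    err = (w * X + b) - y
    # gradients of the mean squared error cost with respect to w and b
    w -= lr * 2 * np.mean(err * X)
    b -= lr * 2 * np.mean(err)

print(round(w, 3), round(b, 3))   # approaches 2 and 1 as the cost is minimized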


Thursday, March 21, 2013

What are principles of autonomic networking?


The complexity, dynamism and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The response to this, in networking terms, is 'autonomic networking'. 
The goal is to build network systems that are capable of managing themselves according to high-level guidance provided by humans. Meeting this goal, however, calls for a number of scientific advances and newer technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: A highly flexible structure that the makers of autonomic systems prefer over a layering approach. This is the first target of autonomic networking.

Function re-composition: An architectural design has been envisioned that would provide highly dynamic, autonomic and flexible formation of large-scale networks. In such an architecture, the functionality would be composed in an autonomic fashion.

Atomization: The functionality is broken down into smaller atomic units. These atomic units allow maximum freedom of re-composition.

Closed control loop: This is one of the fundamental concepts of control theory, and it is now also counted among the fundamental principles of autonomic networking. The loop controls and maintains the properties of the controlled system within the desired bounds by constantly monitoring the target parameters.

The autonomic computing paradigm is inspired by the human autonomic nervous system. An autonomic system must therefore have a mechanism by which it can change its behavior according to changes in essential variables in the environment and bring itself back into a state of equilibrium. 
In autonomic networking, survivability can be viewed in terms of the following:
  1. Ability to protect itself
  2. Ability to recover from the faults
  3. Ability to reconfigure itself as per the environment changes.
  4. Ability to carry out its operation at an optimal level.
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: This includes factors such as CPU utilization, excessive memory usage and so on.
  2. The external environment: This includes factors such as safety against external attacks etc.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: These sensors are required for sensing the changes.
  2. Motor channels: These channels would help the system in reacting and overcoming the effects of the changes.
The changes sensed by the sensor channels are analyzed to determine whether the variables are within their viability limits. If a variable is detected to be outside its limit, the system plans what changes it should introduce to bring it back within the limit, thus returning the system to its equilibrium state (a minimal sketch of such a loop follows). 
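Here is a minimal sketch of such a closed control loop; the monitored variable, threshold and corrective action are all assumptions for illustration, not part of any real autonomic system.

import random, time

CPU_LIMIT = 0.80           # assumed viability limit for CPU utilization

def sense():
    """Sensor channel: read the monitored variable (simulated here)."""
    return random.uniform(0.5, 1.0)

def act(utilization):
    """Motor channel: react to bring the system back toward equilibrium."""
    print(f"utilization {utilization:.2f} above limit -> shedding load")

for _ in range(5):                  # a real loop would run continuously
    utilization = sense()           # monitor
    if utilization > CPU_LIMIT:     # analyze against the viability limit
        act(utilization)            # plan and execute a correction
    time.sleep(0.1)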


Wednesday, March 20, 2013

What are components of autonomic networking?


The concept of autonomic systems has been derived from a biological entity called the autonomic nervous system (ANS). In the human body this system is responsible for functions such as blood pressure and circulation, respiration and emotive response. 
In this article we discuss the various components of autonomic networking.

Components of Autonomic Networking

Autognostics: 
- This category of autonomic components includes capabilities such as self-awareness, self-discovery and self-analysis. 
- With all these capabilities, an autonomic system is capable of having a high-level view of itself. 
- In other words, it represents the perceptual sub-systems that gather, analyze and report on the states and conditions of the system. 
- These components provide a basis for the system to respond to and validate its decisions. 
- In simple words, autognostics provides self-knowledge. 
- If this component is rich, it might provide various perceptual senses. 
- In autonomic systems, models of both the external and internal environments are embedded, through which perceived threats and states can be assigned some relative value. 
- When it comes to autonomic networking, inputs from the following are taken for defining the state of the network:
a) Various network elements such as network interfaces and switches (including their current state, specification and configuration)
b) End-hosts
c)  Traffic flows
d) Logical diagrams
e) Design specifications
f)   Application performance data
- This component interoperates with the other components of the autonomic system.

Configuration management: 
- This component is responsible for the interactions that take place among the interfaces and the elements.
- It includes an accounting capability, making it possible to track configurations over time and under various circumstances (a small sketch follows this list). 
- Metaphorically, it acts as the memory of the autonomic system. 
- Provisioning and remediation of a network can be applied through configuration settings.
- In addition, configuration settings can be used to selectively affect performance and access.
- Today, these are largely actions taken by human engineers. 
- With very few exceptions, interface settings are configured manually or through automated scripts. 
- The dynamic population of devices is maintained implicitly.
- This component must be capable of operating on all devices and of recovering old configuration settings. 
- There can be some situations where the states may become unrecoverable. 
- Therefore, the sub-system must be capable of assessing the consequences of changes before they are issued.
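A minimal sketch of this 'memory' role, with made-up device names and settings: keep timestamped snapshots of device configurations so that an earlier configuration can be recovered if a change has to be undone.

import copy, time

class ConfigStore:
    def __init__(self):
        self.history = {}            # device -> list of (timestamp, config)

    def record(self, device, config):
        self.history.setdefault(device, []).append((time.time(), copy.deepcopy(config)))

    def recover(self, device, index=-2):
        """Return an earlier configuration (default: the one before the latest)."""
        return self.history[device][index][1]

store = ConfigStore()
store.record("switch-1", {"vlan": 10, "mtu": 1500})
store.record("switch-1", {"vlan": 20, "mtu": 9000})   # a change that may need undoing
print(store.recover("switch-1"))                       # -> {'vlan': 10, 'mtu': 1500}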

Policy management: 
- This component is inclusive of the following:
a)   Policy specification
b)   Deployment
c)   Reasoning over the policies
d)   Update of policies
e)   Maintenance of the policies
f)    Enforcement
- The reasons for including this component are:
a)  Configuration management
b)  Definition of the roles and relationships
c)  Establishment of trust and reputation
d)  Description of business processes
e)  Definition of performance
f) Constraints on behavior issues such as privacy, resource access, collaboration and security.
- It represents a model of ideal behavior and of the environment, describing effective interaction.
- To define what constitutes a policy, it is important to know what is involved in its management.

Autodefense: 
- The mechanism presented by this component is both dynamic and adaptive in nature.
- This mechanism has been developed to keep the network infrastructure safe from the malicious attacks. 
- Further, it also prevents the illegal use of the infrastructure for attacking the various technological resources. 
- This component has the capability of striking a balance between performance objectives and threat management actions. 
- This component can be compared to the immune system of the human body.

Security: 
The structure provided by the security component is responsible for defining and enforcing the relationships between the following:
a)   Roles
b)   Content
c)   Resources

