

Showing posts with label User. Show all posts

Monday, December 10, 2018

Customers forum - Need to monitor and update

I was browsing the forum for one of those new-age smartwatches, and the level of frustration among some users was plain: they felt their feedback was not being taken seriously. One could argue that an organization cannot respond to every piece of feedback posted anywhere, but this was a user forum hosted on the product's own site. Users reasonably expect such a forum to be a channel for their feedback, and the company does respond to some of it from time to time. Even so, regular users had come to feel that none of their suggestions were being answered. So even if employees were picking up the feedback, the response loop was never closed, and users had no sense that anyone was listening.
This bred frustration, and when new users commented on something, the regulars told them there was no point in giving feedback or suggestions since the company did not respond on the forum. That is a damaging reputation for any company. Ask customer representatives or product managers at reputable companies and they will tell you that users are a rich source of suggestions and defect reports, an invaluable input for iterating and selecting future features. Feedback from people who actively seek out the product's own forum is something a company should really want, and ideally it should train customer support as well as product management to monitor and intervene in customer forums. This both gathers inputs and keeps customers interested enough to keep visiting. Looked at from another angle, many companies spend a lot of time and money running beta forums to float new features and collect user feedback; a customer forum is not exactly a beta forum, but it is still a way to get inputs.
What should ideally be done? Representatives of customer support and product management should be empowered to monitor and post in customer forums regularly, and even members of the product team should get a quick training on how to respond in user forums and visit them. Product team members often hold strong opinions on new features, workflows, and so on; exposure to customers and their real-world problems makes them more open to new ideas and gives them a better understanding of user workflows.


Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic can be controlled, we can obviously maintain control over network congestion. 
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism, and
  2. The control mechanism
- The feedback mechanism corresponds to the network policies and the control mechanism to the user policies.
- There are of course other components as well, but these two are the most important. 
- While analyzing one component, the other components are simply assumed to be operating at optimal levels. 
- At the end, it has to be verified whether or not the combined system works as expected under various types of conditions.

The network policy comprises the following three algorithms:

1. Congestion Detection: 
- Before information about the network can be sent out as feedback, its load level, or state, must be determined. 
- In general, the network can be in any of n possible states, and at a given time it is in exactly one of them. 
- The congestion detection algorithm maps these states onto the possible load levels. 
- In the simplest case there are two load levels, namely under-load and over-load. 
- Under-load means operating below the knee point; over-load means operating above it. 
- A k-ary version of this function produces k load levels. 
- The congestion detection function works on three criteria: link utilization, queue lengths, and processor utilization. 
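The two-level and k-ary mappings described above can be sketched as follows. The knee value and the uniform splitting of utilization into k levels are illustrative assumptions, not taken from any particular scheme:

```python
# Illustrative sketch of congestion detection from link utilization.
# KNEE_UTILIZATION is an assumed value; real schemes derive the knee
# from measured throughput/delay behavior.
KNEE_UTILIZATION = 0.8

def detect_load_level(link_utilization: float) -> str:
    """Binary detection: below the knee is under-load, above it over-load."""
    return "under-load" if link_utilization < KNEE_UTILIZATION else "over-load"

def detect_k_ary(link_utilization: float, k: int) -> int:
    """k-ary version: split the utilization range [0, 1] into k load levels."""
    return min(int(link_utilization * k), k - 1)  # clamp utilization 1.0 into the top level

print(detect_load_level(0.55))  # under-load
print(detect_k_ary(0.95, 4))    # 3 (the highest of 4 levels)
```

Queue lengths and processor utilization could be fed through the same kind of mapping, with their own knee values.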

2. Feedback Filter: 
- After the load level has been determined, it has to be verified that the state lasts for a sufficiently long duration before it is signaled to the users. 
- Only then is feedback about the state actually useful: the duration is long enough for it to be acted upon. 
- A rapidly changing state, on the other hand, can create confusion. 
- The state may already have passed by the time the users get to know of it. 
- Such states give misleading feedback. 
- A low-pass filter function serves the purpose of filtering out such transients and passing on the persistent, desirable states. 
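One common way to realize such a filter is an exponentially weighted moving average; this sketch and its smoothing constant are illustrative assumptions rather than part of any specific scheme:

```python
def low_pass(samples, alpha=0.3):
    """Exponentially weighted moving average: a brief spike is damped out,
    while a state that persists pulls the smoothed estimate toward it."""
    smoothed = samples[0]
    out = []
    for s in samples:
        smoothed = alpha * s + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# 1 = over-load observed, 0 = under-load observed
readings = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]
filtered = low_pass(readings)
print(filtered[2])   # the one-sample spike stays well below 1
print(filtered[-1])  # the sustained over-load raises the estimate
```

A threshold on the smoothed value then decides whether the over-load state is persistent enough to be signaled.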

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they may contribute to cutting down the traffic. 
- The purpose of the feedback selector function is to identify the users to whom the information has to be sent.

The user policy comprises the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends the feedback signals interpret them only after accumulating a number of signals. 
- The network is probabilistic in nature, and therefore the signals might not all agree. 
- According to some signals the network might be under-loaded, while according to others it might be overloaded. 
- These signals have to be combined to decide the final action. 
- Based on the percentages, an appropriate weighting function may be applied. 
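A minimal sketch of such a combination, assuming equal weights and a simple majority threshold (both illustrative choices):

```python
def combine_signals(signals, threshold=0.5):
    """signals: 1 = over-load reported, 0 = under-load reported.
    The fraction of over-load reports decides the interpreted state."""
    fraction_overloaded = sum(signals) / len(signals)
    return "over-loaded" if fraction_overloaded >= threshold else "under-loaded"

print(combine_signals([1, 1, 0, 1]))  # over-loaded (75% of signals agree)
print(combine_signals([0, 0, 1]))     # under-loaded
```

A real weighting function might give recent signals more weight than older ones, but the idea is the same.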

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change and the second the amount. 
- The first part is the decision function proper and the second is the increase/decrease algorithm. 

3. Increase/Decrease Algorithm: 
- This forms the core of the control scheme.
- The control measure to be taken is based on the feedback obtained. 
- It helps in achieving both fairness and efficiency. 
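The best-known instance of such an algorithm is additive-increase/multiplicative-decrease (AIMD), the combination that lets schemes like TCP congestion control converge to a fair and efficient operating point. A sketch, with an illustrative step size and decrease factor:

```python
def aimd_step(load, overloaded, increase=1.0, decrease_factor=0.5):
    """One increase/decrease step: grow the offered load additively while
    the network reports under-load, cut it multiplicatively on over-load."""
    return load * decrease_factor if overloaded else load + increase

load = 8.0
load = aimd_step(load, overloaded=False)  # 9.0: additive increase
load = aimd_step(load, overloaded=True)   # 4.5: multiplicative decrease
print(load)
```

The asymmetry matters: additive increase probes gently for spare capacity, while multiplicative decrease backs off quickly when over-load is signaled.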


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- Many times the demand for a resource is more than what the network can offer, i.e., its capacity. 
- Too much queuing then occurs in the network, leading to a great loss of packets. 
- When the network is in the state of congestive collapse, its throughput drops toward zero while the path delay increases by a great margin. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in a region where the throughput is high and the delay is low. 
- In other words, these schemes prevent a computer network from falling prey to the network congestion problem. 
- The recovery mechanism is implemented through congestion control and the prevention mechanism through congestion avoidance. 
- The network and the user policies are modeled for the purpose of congestion avoidance. 
- Together they act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control gets more complex when the network is using a connection-less protocol. 
- Avoiding congestion rather than simply controlling it is the main focus. 
- A congestion avoidance scheme is designed after comparing it with a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
- To do so, a few goals have been set, each with an associated test for verifying whether the scheme meets it:
Ø  Efficiency: If the network is operating at the “knee” point, it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously, so the point of optimal operation also varies continuously; the scheme must keep tracking it.
Ø  Minimum oscillation: Only schemes with a small oscillation amplitude are preferred.
Ø  Convergence: The scheme should bring the network to a point of stable operation as long as the workload and the network configuration stay stable. Schemes that satisfy this goal are called convergent; divergent schemes are rejected.
Ø  Fairness: This goal aims at providing a fair share of the resources to each independent user.
Ø  Robustness: This goal concerns the capability of the scheme to work in any random environment. Schemes that work only for deterministic service times are therefore rejected.
Ø  Simplicity: Schemes are accepted in their simplest version.
Ø  Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme found to be too sensitive to a particular parameter is rejected.
Ø  Information entropy: This goal is about how the feedback information is used. The aim is to get the maximum information with the minimum possible feedback.
Ø  Dimensionless parameters: A parameter with dimensions such as mass, time, or length is effectively a function of the network configuration or speed. A parameter with no dimensions has wider applicability.
Ø  Configuration independence: The scheme is accepted only if it has been tested on various different configurations.

Congestion avoidance scheme has two main components:
Ø  Network policies: It consists of the following algorithms: feedback filter, feedback selector and congestion detection.
Ø  User policies: It consists of the following algorithms: increase/ decrease algorithm, decision function and signal filter.
These algorithms also decide whether the network feedback is carried in a packet header field or as source quench messages.




Saturday, July 20, 2013

What are datagram subnets?

- A datagram is defined as the basic transfer unit used in networks that operate by packet switching. 
- In such networks, the time of arrival and delivery is not guaranteed. 
- The network service also does not guarantee ordered delivery. 
- The first project to use datagrams was CYCLADES, which was itself a packet switching network. 
- The hosts in this network were responsible for reliable delivery rather than relying on the network to provide it. 
- They did this by combining the datagrams, unreliable in themselves, with end-to-end protocol mechanisms. 
- According to Louis Pouzin, the inspiration for datagrams came from two sources: Donald Davies's studies and the simplicity of the approach. 
- The concept of the datagram subnet was eventually adopted in protocols such as AppleTalk, Xerox Network Systems, and of course the Internet Protocol.
- Datagrams are used at the first four layers of the OSI model. 
- Each layer has its own name for them, as we mention below:
  1. Layer 1: chip (CDMA)
  2. Layer 2: frame (IEEE 802.3 and IEEE 802.11), cell (ATM)
  3. Layer 3: packet
  4. Layer 4: segment
- A datagram is a self-contained data packet. 
- This means it does not rely on any earlier exchanges, since there is no fixed connection between the two points of communication, unlike in a majority of telephone conversations. 
- Virtual circuits and datagram subnets are thus opposites of each other. 

RFC 1594 defines a datagram as an independent, self-contained data entity carrying sufficient information to be routed from the source to the destination without relying on earlier exchanges between the two hosts and the transporting network.

- The service offered by datagram subnets can be compared to a mail delivery service, because the user needs to mention only the destination address.
- However, this service gives no guarantee of whether the datagram will be delivered, and provides no confirmation upon successful delivery of the packet. 
- These are of course two major disadvantages of datagram subnets. 

- In datagram subnets, the routes are not predetermined; the route for each datagram is created at the time it is sent. 
- This too has its disadvantages. 
- The order in which the datagrams are sent or received is also not preserved. 
- In some cases, a number of datagrams with the same destination might travel along different routes.

- Every datagram has two components, namely the header and the data payload.
- The header consists of all the information needed to route the datagram from the source to the destination without depending on earlier exchanges between the network and the equipment. 
- The source as well as the destination address is usually included in the header as a field. 
- The data to be transmitted is stored in the payload. 
- In some cases the payload is nested inside a tagged header; this process is commonly known as encapsulation. 
- The Internet Protocol (IP) defines standards for various types of datagrams. 
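The header/payload split and encapsulation can be sketched as below; the three-field header layout is purely illustrative and far simpler than a real IP header:

```python
import struct

# Hypothetical minimal datagram: 4-byte source address, 4-byte destination
# address, and a 2-byte payload length, all in network byte order.
HEADER_FORMAT = "!IIH"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 10 bytes

def encapsulate(src: int, dst: int, payload: bytes) -> bytes:
    """Prepend the routing header to the payload."""
    return struct.pack(HEADER_FORMAT, src, dst, len(payload)) + payload

def decapsulate(datagram: bytes):
    """Split a datagram back into (src, dst, payload)."""
    src, dst, length = struct.unpack(HEADER_FORMAT, datagram[:HEADER_SIZE])
    return src, dst, datagram[HEADER_SIZE:HEADER_SIZE + length]

dgram = encapsulate(0x0A000001, 0x0A000002, b"hello")
print(decapsulate(dgram))  # (167772161, 167772162, b'hello')
```

Everything a router would need sits in the fixed-size header, which is exactly what makes the datagram self-contained.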


Monday, July 1, 2013

What is the difference between TCP and UDP?

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two very important protocols. Both are transport protocols, and they are counted among the core protocols of the IP suite. 
Both operate at the fourth layer, i.e., the transport layer of the TCP/IP model, but the two protocols are used very differently.

1. Reliability: 
- TCP is a connection-oriented protocol whereas UDP is connection-less. 
- With TCP, a sent message is delivered unless the connection fails outright. 
- If the connection is lost during delivery of the message, the receiver requests the lost part again. 
- A message does not get corrupted during transfer. 
- UDP is less reliable: if a message is sent, there is no guarantee that it will be delivered; it may get lost on the way. 
- The message might also get corrupted during transfer.

2. Ordered: 
- With TCP, if two messages are sent along the same connection one after the other, the message that was sent first is sure to be delivered first. 
- The data is therefore always delivered in the same order. 
- You do not have to worry about the order of the arriving data. 
- In the case of UDP, the order of arrival of the data is not guaranteed. 
- The second message can arrive before the first one. 

3. Heavyweight: 
- TCP is heavyweight: when the low-level parts of the transmission stream arrive in the wrong order, retransmission requests have to be sent again and again. 
- All the lost parts of the message have to be put together in the proper sequence, so reassembly takes some time. 
- UDP, on the other hand, is lightweight.
- After sending a message, the sender does not have to think about tracking connections or ordering of messages. 
- This makes it a lot quicker, and there is much less work for the network stack in translating the data obtained from the packets.

4. Streaming: 
- In TCP, the data is read in the form of a stream.
- Nothing distinguishes where one message ends and another begins. 
- There can be several packets per read call. 
- In UDP, each data packet is sent individually, and if packets arrive, they do so whole. 
- Here there is exactly one packet per read call.

5. Examples of TCP-based applications are FTP (File Transfer Protocol), the World Wide Web (e.g., Apache on TCP port 80), Secure Shell (e.g., OpenSSH on port 22), and so on. Examples of UDP-based applications are TFTP (Trivial File Transfer Protocol), VoIP (Voice over IP), IPTV, online multiplayer games, the Domain Name System (e.g., DNS on UDP port 53), etc.

6. Error-checking: 
- The TCP protocol offers extensive error-checking mechanisms through acknowledgement of data, flow control, and so on. 


In TCP, a connection must be established in order to transfer data, while the User Datagram Protocol operates in datagram mode. You can choose between the two protocols depending on the requirements: if guaranteed delivery of data is required, the Transmission Control Protocol must be chosen. The User Datagram Protocol comes only with a basic error-checking mechanism: it checks the data by means of checksums. 
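The ordering and streaming differences above can be observed directly with Python's standard socket API on the loopback interface; the addresses and messages here are illustrative:

```python
import socket

# UDP: each send is one self-contained datagram; one recvfrom() call
# returns exactly one datagram, with its boundary preserved.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"first", udp_recv.getsockname())
udp_send.sendto(b"second", udp_recv.getsockname())
d1 = udp_recv.recvfrom(1024)[0]
d2 = udp_recv.recvfrom(1024)[0]
print(d1, d2)  # two separate datagrams

# TCP: the connection delivers one ordered, reliable byte stream in which
# the boundaries between the two sends disappear.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"first")
cli.sendall(b"second")
cli.close()
stream = b""
while True:
    chunk = conn.recv(1024)
    if not chunk:  # empty read: the peer closed the connection
        break
    stream += chunk
print(stream)  # b'firstsecond'
for s in (udp_recv, udp_send, conn, srv):
    s.close()
```

On the TCP side the two sends arrive as one undifferentiated stream, which is why application protocols over TCP must frame their own messages.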


Friday, June 28, 2013

What are the advantages of frame relay over a leased phone line?

Frame relay and leased phone lines are two of the physical connection media used for setting up connections. 

Advantages of Frame Relay over Leased Phone Line
- Frame relay is a standardized WAN (wide area network) technology specifying the physical and logical link layers of digital telecommunication channels. 
- It does so by means of a packet switching methodology.
- The frame relay technology was originally designed for transport across ISDN (Integrated Services Digital Network) infrastructure. 
- Today, it is used with a number of other network interfaces. 
- Frame relay is commonly implemented for VoFR (voice over frame relay). 
- It is also used as an encapsulation technique for data. 
- Frame relay is used between WANs and LANs.
- A private or leased line is provided to the user that connects to the frame relay node. 
- The frequently changing path is transparent to the WAN protocols used extensively by the end users. 
- Data is transmitted via these networks, and the frame relay network handles all of this.
- One advantage of frame relay over leased lines is that it is less expensive; this is what makes frame relay so popular in the telecommunications industry.
- Another advantage that makes frame relay popular is that user equipment can be configured with extreme simplicity in a frame relay network. 
- Ethernet over fiber-optic links is now in heavy use. 
- This has led to frame relay being displaced by dedicated broadband services like DSL and cable modem, and by techniques such as VPN and MPLS. 
- However, there are a number of rural regions (in India, for example) where cable modem and DSL services are still absent.
- In such areas, the only option for a non-dial-up connection is a 64 Kbit/s frame relay line.
- Thus, it might be used by a retail chain, for example, to connect to its corporate WAN. 
- The designers of frame relay aimed to offer a telecommunication service for cost-efficient transmission of intermittent data traffic between end points in WANs and local area networks. 
- The frame relay process puts the data into units of variable size called frames. 
- Any required error correction, including re-transmission of data, is left to the end points. 
- This increases the speed of the overall transmission of data. 
- The network provides a PVC (permanent virtual circuit), so the customer sees a dedicated connection without having to pay for a leased line that is engaged full time. 
- The service provider figures out the route each frame travels to its destined end point, and decides the charges based on usage. 
- A level of service quality can be selected by the enterprise: some frames can be prioritized while the importance of other frames is reduced. 
- The frame relay can run on systems such as the following:
Ø  Fractional T – 1
Ø  Full T – carrier
Ø  E – 1
Ø  Full E carrier
- Frame relay provides a mid-range service between ISDN, which offers bandwidth at 128 Kbps, and ATM (asynchronous transfer mode). 
- It not only provides services of its own but also complements these technologies. 
- The base of the frame relay technology is X.25 packet switching, which was designed for data transmission over analog voice lines.



Thursday, May 30, 2013

What are the various Disk Scheduling methods?

About Disk Scheduling

The I/O system has got the following layers:
  1. User processes: The functions of this layer include making I/O calls, formatting the I/O, and spooling.
  2. Device independent software: Functions are naming, blocking, protection, allocating, and buffering.
  3. Device drivers: Functions include setting up the device registers and checking their status.
  4. Interrupt handlers: These perform the function of waking up the I/O drivers upon completion of the I/O.
  5. Hardware: Performs the I/O operations.
- A disk drive can be pictured as a large one-dimensional array of logical blocks, which are the smallest units of transfer.  
- These blocks are mapped onto the disk sectors in a sequential manner. 
- It is the responsibility of the operating system to use the disk drive hardware efficiently, so as to increase the access speed and the bandwidth of the disk. 

Algorithms for Scheduling Disk Requests

Several algorithms exist for scheduling disk requests:

Ø  SSTF: 
- In this method, the request with the minimum seek time from the current head position is selected. 
- This method is a variant of SJF (shortest-job-first) scheduling and therefore carries some possibility of process starvation.

Ø  SCAN: 
- The disk arm starts from one end of the disk and moves toward the other end, servicing requests along the way until it reaches the opposite end. 
- There the head direction is reversed and the process continues. 
- This is sometimes called the elevator algorithm.

Ø  C – SCAN: 
- A better algorithm than the previous one, offering a more uniform waiting time. 
- The head moves from one end to the other, servicing the requests encountered along the way. 
- The difference is that on reaching the other end, the head goes straight back to the beginning without heeding any requests on the way, and then starts again. 
- The cylinders are treated as a circular list that wraps around from the last cylinder to the first.

Ø  C – Look: 
- This is a modified version of C-SCAN. 
- Here the arm travels only as far as the last request in each direction rather than going to the far end of the disk. 
- Then the arm jumps straight back to the first pending request at the other end, and the process continues.
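As a sketch, SSTF and SCAN from the list above can be implemented in a few lines; the request queue and head position below are the usual textbook example, not values from this post:

```python
def sstf(requests, head):
    """Shortest Seek Time First: repeatedly service the pending request
    closest to the current head position."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda cyl: abs(cyl - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan_up(requests, head):
    """SCAN (elevator), moving toward higher cylinders first: service
    everything on the way up, then reverse and sweep back down."""
    up = sorted(c for c in requests if c >= head)
    down = sorted((c for c in requests if c < head), reverse=True)
    return up + down

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf(queue, 53))     # [65, 67, 37, 14, 98, 122, 124, 183]
print(scan_up(queue, 53))  # [65, 67, 98, 122, 124, 183, 37, 14]
```

Note how SSTF greedily chases the nearest cylinder, which is exactly where its starvation risk comes from: a distant request can be postponed indefinitely while nearby requests keep arriving.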

- For disk scheduling it is important that the method be selected as per the requirements. 
- SSTF is the most commonly used and appeals to the needs naturally. 
- For a system that often puts a heavy load on the disk, the SCAN and C-SCAN methods can help. 
- The number as well as the kind of requests affects the performance in a number of ways.
- The file-allocation method, in turn, influences the requests for disk service. 
- These algorithms should be written as a separate module of the OS so that, if required, one can easily be replaced with another. 
- As a default algorithm, LOOK or SSTF is the most reasonable choice. 

Ways to attach to a disk

There are two ways of attaching the disk:
Ø  Network attached: This attachment is made via a network; this is called network attached storage. A dedicated network of such storage devices forms a storage area network.
Ø  Host attached: This attachment is made via the I/O port.


All these disk scheduling methods aim to optimize access to secondary storage and make the whole system more efficient. 


Sunday, May 19, 2013

What are different types of schedulers and their workings?


Scheduling is an important part of the working of an operating system. 
- The scheduler is the component that gives processes, threads, and data flows access to resources. 
- These resources may include processor time and communications bandwidth. 
- Scheduling is necessary for effectively balancing the load of the system and achieving the target QoS (quality of service). 
- Scheduling is also necessary on systems that do multitasking or multiplexing on a single processor, since they need to divide the CPU time among many processes. 
- In multiplexing, it is required for timing the simultaneous transmission of multiple flows.

Important things about Scheduler

There are three things that most concern the scheduler:
  1. Throughput
  2. Latency, inclusive of the response time and the turnaround time
  3. Waiting time, or fairness
- But in practical implementations, conflicts arise between these goals, for example between latency and throughput. 
- It is the scheduler that makes the compromise between any two goals. 
- Based on the user's requirements and objectives, it is decided which goal is given preference. 
- In systems that operate in a real-time environment, such as embedded systems and robotics, the scheduler has to ensure that processes can meet their deadlines. 
- This is a very critical factor in maintaining the stability of the system. 
- The administrative back end is used for managing the scheduled tasks that are then sent to the mobile devices.  

Types of Schedulers

There are three different types of schedulers, which we discuss below:

Long term Schedulers or Admission Schedulers: 
- The purpose of this type of scheduler is to decide which processes and jobs are admitted to the ready queue. 
- When a program attempts to execute a process, it is the responsibility of the long-term scheduler to delay or authorize the request to admit the process to the ready queue. 
- Thus, this scheduler dictates which processes the system will execute. 
- It also dictates the degree of concurrency and the handling of CPU-intensive versus I/O-intensive processes. 
- Modern operating systems use this to make sure that processes have enough time to finish their tasks. 
- Modern GUIs would be of little use without real-time scheduling. 
- The long-term queue resides in secondary memory.

Medium term Schedulers: 
- This scheduler removes processes from physical memory and places them in virtual memory, and vice versa. 
- These operations are called swapping out and swapping in. 
- A process that has been inactive for some time might be swapped out by the scheduler. 
- It may also swap out a process with frequent page faults, a low priority, a large memory footprint, and so on. 
- This is necessary since it makes space available for other processes.

Short term Schedulers: 
- These schedulers are more commonly known as CPU schedulers.
- The short-term scheduler decides which of the ready processes will execute next after a clock interrupt, a system call, an I/O interrupt, a hardware interrupt, and so on. 
- Thus, the short-term scheduler makes decisions far more frequently than the long-term and medium-term schedulers, since it has to decide after every time slice.
There is one more component involved in CPU scheduling that is not counted among the schedulers: it is called the dispatcher. 
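As a sketch of short-term scheduling, here is round-robin, one of the classic CPU scheduling policies; the burst times and quantum are made-up values for illustration:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Each process runs for at most one time quantum, then re-joins the
    back of the ready queue until its burst is exhausted. Returns the
    order in which process IDs are dispatched to the CPU."""
    ready = deque((pid, burst) for pid, burst in enumerate(burst_times))
    dispatch_order = []
    while ready:
        pid, remaining = ready.popleft()
        dispatch_order.append(pid)
        remaining -= quantum
        if remaining > 0:
            ready.append((pid, remaining))  # not finished yet
    return dispatch_order

print(round_robin([3, 5, 2], quantum=2))  # [0, 1, 2, 0, 1, 1]
```

The loop body corresponds to one scheduling decision per time slice, which is why the short-term scheduler runs so much more often than the other two.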


Wednesday, May 15, 2013

What is the Process Control Block? What are its fields?


The task control block, switch frame, and task struct are different names for one and the same thing, commonly called the PCB or process control block. 
- This data structure belongs to the kernel of the operating system and holds the information required for managing a specific process. 
- The process control block is how a process is manifested inside the operating system. 
- The operating system needs to be regularly informed about the status of resources and processes, since managing the computer system's resources for the processes is part of its purpose. 
- The common approach to this is to create and update status tables for every process, resource, and relevant object such as files, I/O devices, and so on:
1.  Memory tables are one such example: they contain information about how the main memory and the virtual or secondary memory have been allocated to each process. They may also contain the authorization attributes given to each process for accessing shared memory areas.
2.   I/O tables are another example. The entries in these tables state whether the device required by a process is available, or what has been assigned to the process. The status of the I/O operations taking place is also recorded here, along with the addresses of the memory buffers they are using.
3.   Then we have the file tables, which contain information about the status of the files and their locations in memory.
4. Lastly, we have the process tables for storing the data the operating system requires to manage the processes. At least a part of the process control block is kept in main memory, though its configuration and location vary with the operating system and the memory management techniques it uses.
- The physical manifestation of a process consists of the program's dynamic and static data areas, its instructions, task management information, etc., and this is what actually forms the process control block. 
- The PCB plays a central role in process management. 
- Operating system utilities such as memory utilities, performance monitoring utilities, resource access utilities, and scheduling utilities access and modify it. 
- The current state of the operating system is defined by the set of process control blocks. 
- Data structuring is carried out in terms of PCBs. 
- In today's sophisticated multi-tasking operating systems, many different types of data items are stored in the process control block. 
- These are the data items necessary for efficient and proper process management. 
- Even though the details of the PCB depend on the system, the common parts can still be identified and classified into the following three classes:
1.  Process identification data: This includes the unique identifier of the process, which is usually a number. In multi-tasking systems it may also consist of the parent process identifier, the user identifier, the user group identifier, and so on. These IDs are very important since they let the OS cross-check against its tables.
2.   Process state data: This information defines the status of the process when it is not executing, making it easy for the operating system to resume the process from the appropriate point later. It therefore consists of the CPU's processor status word, the CPU general-purpose registers, the stack pointer, frame pointers, and so on.
3.   Process control data: This includes the process scheduling state, the priority value, and the amount of time elapsed since the process was suspended. 
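The three classes of fields can be sketched as a data structure; the specific field names below are illustrative choices, not the layout of any real kernel (Linux's task_struct, for instance, has hundreds of members):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # process identification data
    pid: int
    parent_pid: int
    user_id: int
    # process state data -- enough context to resume the process later
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    stack_pointer: int = 0
    # process control data
    state: str = "ready"        # e.g. ready / running / waiting
    priority: int = 0
    cpu_time_used: float = 0.0

pcb = ProcessControlBlock(pid=42, parent_pid=1, user_id=1000)
pcb.state = "running"
print(pcb.pid, pcb.state)
```

On a context switch, the kernel would save the running process's registers and program counter into its PCB and restore those of the next process, which is exactly why the state data must be complete enough to resume from.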

