

Showing posts with label Kernel. Show all posts

Tuesday, May 21, 2013

Define the Virtual Memory technique?


Modern operating systems use multitasking kernels, and these kernels often run into memory management problems. Because physical memory becomes fragmented, it does not suffice for the tasks they must execute, so they have to borrow additional space from secondary storage. They cannot use that storage directly, however. Virtual memory offers a solution to this problem.

What is Virtual Memory technique?

- This technique makes the fragmented main memory appear to the kernel and its processes as contiguous main memory. 
- Since it is not really main memory but only appears to be, it is called virtual memory, and the technique is called the virtual memory technique. 
- Since it helps in managing memory, it is essentially a memory management technique. 
- Main storage becomes fragmented through ordinary program and process activity. 
- The virtual memory technique virtualizes the main memory available to processes and tasks, so that it appears to each process as a contiguous range of memory. 
- This memory forms the process's address space. 
- Virtual address spaces such as these are managed by the operating system. 
- The operating system itself assigns real memory to the virtual memory. 
- The virtual addresses of the allocated virtual address spaces are translated into physical addresses automatically by the CPU. 
- It achieves this with the help of memory management hardware designed specially for this purpose. 
- Processes continue to execute uninterrupted as long as this hardware translates virtual addresses into real memory addresses successfully. 
- If it fails to do so at any point, execution halts and control is transferred to the operating system. 
- The operating system's duty is then to move the requested memory page from the backing store into main memory. 
- Once done, it returns control to the process that was interrupted. 
- This greatly simplifies execution as a whole. 
- Even if an application requires more data or code than fits in real memory, the program itself does not have to shuffle it back and forth between the backing store and real memory. 
- Furthermore, this technique protects processes from one another: each is given a distinct address space, isolating the memory allocated to it from other tasks.
- Application programming is made much easier by the virtual memory technique, since it hides the fragmentation of real memory. 
- The burden of memory hierarchy management is delegated to the kernel, which eliminates the need for explicit handling of overlays by the program. 
- Thus each process can execute in an address space dedicated to it. 
- The need to relocate program code, or to access memory through relative addressing, is obviated. 
- The concept of virtual memory was generalized and eventually named memory virtualization. 
- Virtual memory has gradually become an inseparable part of modern computer architecture. 
- Dedicated hardware support is absolutely necessary to implement it. 
- This hardware is built into the CPU as a memory management unit. 
- To boost virtual memory performance, some virtual machines and emulators may employ additional hardware support. 
- Older mainframe computers had no support for the virtual memory concept. 
- Under the virtual memory technique, each program behaves as if it has sole access to memory.
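The address translation step described above can be sketched in C. The 4 KiB page size, the tiny single-level table, and its contents are illustrative simplifications chosen for this sketch; real MMUs walk multi-level tables in hardware.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                    /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Toy page table: virtual page number -> physical frame number.
 * A real MMU walks a multi-level table kept by the kernel. */
static uint32_t page_table[4] = { 7, 3, 0, 5 };

/* Split a virtual address into page number and offset, then swap
 * the page number for the physical frame number. */
uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* offset within page  */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}
```

For example, virtual address 0x1234 lies in page 1 at offset 0x234; since page 1 maps to frame 3 in this toy table, it translates to physical address 0x3234.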


Wednesday, May 15, 2013

Define a system call? List the different types of the system calls?


- A program requests services from the kernel of the operating system through a system call. 
- These services include all the following:
  1. Hardware-related services, such as data access from the HDD.
  2. Creation and execution of processes.
  3. Communication with integral kernel services such as scheduling.
- The system call thus provides an essential interface between a process and the operating system. 
- Modern microprocessor architectures include a security model specifying multiple privilege levels for the execution of software. 
- For example, a program is confined to its own address space, so even by accident it cannot modify or access other executing programs or the OS. 
- Direct manipulation of hardware devices (such as network devices, the frame buffer, etc.) by the program is also prevented this way. 
- However, there are situations where programs genuinely need access to these devices.
- This is why system calls are made available to programs. 
- System calls ensure that such operations are well defined and safely implemented. 
- The operating system executes at the highest privilege level.
- Applications therefore request access to devices from the operating system through system calls. 
- A system call is executed through an interrupt that automatically puts the CPU at the required privilege level. 
- After this, control passes to the kernel. 
- The kernel then determines whether or not the requested service should be granted to the calling program. 
- If the service is granted, the kernel executes a specific set of instructions over which the calling program has no direct control.
- These instructions lower the privilege level back down to that of the program that invoked the call. 
- Finally, control is returned to the calling program.
- Generally, an API or library sits between the operating system and ordinary programs. 
- The purpose of this library is to provide wrapper functions for the calls. 
- These functions usually have the same names as the system calls they invoke. 
- Each function exposes a subroutine through which the system call can be used.
- These functions also lend modularity to the system call. 
- Above all, the primary job of the wrapper is to place the arguments to be passed to the system call in the proper processor registers.
- It also sets the unique system call number on which the kernel dispatches. 
- In this way, the library increases portability.
- Making direct system calls in application code is difficult and requires embedded assembly code.
- In systems based on an exokernel, libraries are especially important as intermediaries: they provide resource management and abstractions, and shield the application from the very low-level kernel. 
- Implementing a system call requires a control transfer involving architecture-specific features. 
- One way of implementing this is a software trap or interrupt. 
- RISC processors can typically use only the interrupt technique, 
but CISC architectures have some additional techniques available. 
- One example is the following two instruction pairs, developed independently by Intel and AMD respectively to serve the same purpose:
SYSENTER/SYSEXIT
SYSCALL/SYSRET
- These are control transfer instructions designed to be very fast.
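The wrapper-versus-direct distinction can be shown concretely on Linux, where libc exposes both the `write` wrapper and a generic `syscall` function that takes the call number explicitly. This sketch is Linux-specific; other systems expose the same idea through different interfaces.

```c
/* Linux-specific sketch: the same kernel service invoked two ways. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Via the libc wrapper: it places the arguments, loads the call
 * number, traps into the kernel, and handles errno for us. */
ssize_t write_wrapped(const char *msg) {
    return write(STDOUT_FILENO, msg, strlen(msg));
}

/* Bypassing the wrapper: we supply the system call number (SYS_write)
 * ourselves, which is essentially what the wrapper does internally. */
ssize_t write_direct(const char *msg) {
    return syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
}
```

Both functions reach the same kernel entry point and return the number of bytes written; portable programs should prefer the wrapper, since system call numbers differ between architectures.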


What is the Process Control Block? What are its fields?


Task control block, switchframe, and task struct are all names for the same thing we commonly call the PCB, or process control block. 
This data structure belongs to the kernel of the operating system and holds the information required to manage a specific process. 
- The process control block is how a process is manifested inside the operating system. 
- The operating system needs up-to-date information on the status of resources and processes, since managing the computer system's resources on behalf of processes is part of its purpose. 
- The common approach is to create and update status tables for every process, resource, and relevant object such as files and I/O devices:
1.  Memory tables are one example: they record how main memory and virtual (secondary) memory have been allocated to each process. They may also record the authorization attributes each process has for accessing shared memory areas.
2.   I/O tables are another example. Their entries state whether a device required by a process is available, or what has already been assigned to the process. The status of I/O operations in progress is also recorded here, along with the addresses of the memory buffers they use.
3.   Then we have the file tables, which contain information about the status of files and their locations in memory.
4. Lastly, we have the process tables, which store the data the operating system requires to manage the processes. At least part of each process control block is kept in main memory, though its layout and location vary with the operating system and its memory management techniques.
- The physical manifestation of a process consists of its instructions, its static and dynamic program data areas, task management information, and so on; this is what the process control block describes. 
- The PCB plays a central role in process management. 
- Operating system utilities such as memory utilities, performance monitoring utilities, resource access utilities, and scheduling utilities access and modify it. 
- The set of process control blocks defines the current state of the operating system. 
- Data structuring in the kernel is carried out in terms of PCBs. 
- Today's sophisticated multitasking operating systems store many different kinds of data items in the process control block. 
- These are the data items necessary for efficient and proper process management. 
- Though the details of the PCB depend on the system, the common parts can be classified into the following three classes:
1.  Process identification data: this includes the unique identifier of the process, usually a number. In multitasking systems it may also include the parent process identifier, the user identifier, the user group identifier, and so on. These IDs are important since they let the OS cross-reference its tables.
2.   Process state data: this information defines the status of the process when it is not executing, making it easy for the operating system to resume the process from the appropriate point later. It consists of the CPU's process status word, the CPU general-purpose registers, the stack pointer, frame pointers, and so on.
3.   Process control data: this includes the process's scheduling state, its priority value, and the amount of time elapsed since its suspension. 
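The three classes of fields can be collected into a single C struct, as a minimal sketch of what a PCB might look like. All field choices here are illustrative; a real kernel's PCB (for example, Linux's `task_struct`) holds far more.

```c
#include <stdint.h>
#include <string.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_TERMINATED };

/* Toy process control block grouping the three classes above. */
struct pcb {
    /* 1. Process identification data */
    int pid;
    int parent_pid;
    int uid;
    /* 2. Process state data: the saved CPU context */
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t registers[16];
    /* 3. Process control data */
    enum proc_state state;
    int priority;
    unsigned long cpu_time_used;
};

/* Initialise a fresh PCB the way a kernel might on process creation. */
struct pcb pcb_create(int pid, int parent_pid, int priority) {
    struct pcb p;
    memset(&p, 0, sizeof p);
    p.pid = pid;
    p.parent_pid = parent_pid;
    p.priority = priority;
    p.state = P_NEW;
    return p;
}
```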


Saturday, May 4, 2013

What is Context Switch?


- A context switch is the process of storing and restoring the context, or state, of a process. 
- This makes it possible to resume execution of the process later from that same saved point. 
- It is what enables several processes to share one CPU, and is therefore an essential feature of any operating system capable of multitasking. 
- The operating system and the processor together determine what constitutes the context. 
- One major characteristic of context switches is that they are computationally intensive.
- Much of operating system design is therefore concerned with minimizing and optimizing the use of these switches. 
- A finite amount of time is required to switch from one process to another. 
- This time is spent on administration: saving and loading registers and memory maps, and updating various lists and tables. 
- A context switch may mean any of the following:
Ø  A register context switch
Ø  A task context switch
Ø  A thread context switch
Ø  A process context switch

Potential Triggers for a Context Switch

There are three potential triggers for a context switch. A switch can be triggered under any of the following three conditions:

1. Multitasking: 
- Commonly, one process must be switched out of the processor so that another process can execute. 
- This is done by some scheduling scheme. 
- A process can trigger this context switch by making itself unrunnable. 
- It can do so by waiting for an I/O operation or a synchronization event to finish. 
- On a multitasking system that uses pre-emptive scheduling, even processes that are still runnable may be switched out by the scheduler. 
- Some pre-emptive schedulers employ a timer interrupt to keep a process from starving the others of CPU time.
- This interrupt fires when a process exceeds its time slice. 
- The interrupt ensures that the scheduler regains control to perform the switch.

2. Interrupt handling: 
- Modern architectures are interrupt driven. 
- This means the CPU can issue a request, for example a disk read, and continue with other execution instead of waiting for the operation to finish. 
- When the operation completes, the interrupt fires and the result is presented to the CPU. 
- An interrupt handler is the program that deals with such interrupts. 
- Upon an interrupt, the hardware automatically switches a part of the context. 
- This part is just enough for the handler to return afterwards to the code that was interrupted.
- The handler may save additional context, depending on the details of both the software and the hardware design. 
- Usually only a small, required part of the context is changed, so as to keep the time spent handling the interrupt as low as possible. 
- The kernel does not schedule a separate process to handle interrupts; the handler runs directly.

3. User and kernel mode switching: 
- A transition between kernel mode and user mode in an operating system does not by itself require a context switch.
- A mode transition on its own is not a context switch. 
- Whether a context switch also takes place during a mode transition depends on the operating system. 


Sunday, April 21, 2013

What is a virtual memory?


- Virtual memory is a memory management technique that is a compulsory requirement for multitasking kernels. 
- With this technique, a computer's different types of data storage, such as disk drive storage and RAM (random access memory), are virtualized into one. 
- Programmers therefore do not have to worry about designing applications around particular kinds of storage. 
- Programs can be designed with only one kind of memory in mind: the virtual memory. 
- This memory behaves just like ordinary memory, and offers more besides. 
- It presents a direct and contiguous memory space for all operations. 
- One might think that programming software against virtual memory would be harder, but this is not so.
- Instead the task becomes easier, because the fragmentation of the main physical memory is hidden. 
- To achieve this, the burden of managing the memory hierarchy is delegated to the kernel.
- This has the added advantage that the need for the program to handle overlays explicitly is eliminated.
- The need to relocate program code, or to access memory via relative addressing, is obviated. 
- This lets each process execute in its own dedicated address space. 
- The concept of virtual memory in its more generalized form is called memory virtualization.
- Modern computer architecture cannot do without virtual memory. 
- The main requirement for implementing virtual memory is hardware support, provided through the memory management unit built into the CPU. 
- To increase the performance of virtual memory implementations, virtual machines and emulators may employ additional hardware support. 
- Computer systems with old operating systems such as DOS, and older mainframes, have no virtual memory functionality. 
- One of the first personal computers to feature virtual memory was the Apple Lisa, designed in the early 1980s. 
- With virtual memory, it appears as if every program has sole access to memory. 
- However, some older operating systems were single-address-space OSes. 
- These operating systems processed all tasks in a single shared space. 
- That space consisted of virtual memory. 
- Special-purpose computer systems such as embedded systems require very consistent response times. 
- These systems often avoid virtual memory, as it may decrease determinism. 
- Virtual memory systems can trigger unpredictable traps, producing unwanted jitter while carrying out I/O operations. 
- This matters because embedded hardware is kept cheap: operations are implemented in software rather than hardware. 
- That software technique is termed bit banging. 
- Older programs needed their own logic for managing both primary and secondary memory. 
- One such piece of logic was overlaying. 
- Virtual memory was therefore introduced as a method of extending primary memory and making that extension easy for programmers.  
- To allow multitasking and multiprogramming, the memory in early systems was divided among many programs. 

Implementing virtual memory raised many problems. One of them was dynamic address translation, which was difficult and quite expensive to implement. 


Saturday, April 13, 2013

What are different components of operating system? – Part 4



8. Disk access and file systems: 
- Through this component of the operating system, users and their programs can sort and organize the files on a computer system. 
This is done through the use of folders, or directories. 
- This is another central feature of operating systems. 
- Computers store data on disks in the form of files, which are structured in certain predefined ways so as to enable faster access, better use of available space, and higher reliability. 
- These specific ways of storing data on disk together constitute the file system. 
- The file system makes it possible to assign names and attributes to files. 
- It also maintains the hierarchy of files and folders in a directory tree. 
- Old operating systems supported only one type of file system and one disk drive. 
- Those file systems had limited capacity, limited speed, and limits on the file names and directory structures that could be used. 
- These limitations often reflected limitations of the operating system itself, which made it difficult to support multiple file systems. 
- Other simple operating systems still have a limited range of storage access options.
- On the other hand, Unix-like operating systems such as Linux support VFS, or virtual file system, technology. 
- Unix offers support for a wide range of storage devices regardless of their design or file system. 
- This enables them all to be accessed via a common API (application programming interface). 
- Programs thereby avoid needing any knowledge of the particular devices they access.
- With a virtual file system, the OS can give programs access to an effectively unlimited range of devices, each hosting any of many file systems.
- It does this through the use of file system drivers and other device drivers. 
- A device driver lets the OS access a connected storage device, such as a flash drive. 
- Every drive speaks a specific command language that only its device driver understands; the driver translates it into a standard one used by the OS for accessing drives.
- The kernel can access the contents of a drive only if the appropriate device driver is in place. 
- The purpose of a file system driver is to translate the commands used for accessing that file system into the standard set recognized by the operating system. 
- Programs then deal with file systems in terms of file names and directories or folders, organized in a hierarchy. 
- They can create, delete, and modify these files.
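The essence of a VFS, one common interface dispatching to many concrete file systems, is a table of function pointers. This sketch is illustrative: the struct and function names are invented for this example and are not a real kernel API.

```c
#include <stddef.h>
#include <string.h>

/* Each concrete file system fills in this operations table; the
 * kernel-side code calls through it without knowing which one it is. */
struct fs_ops {
    const char *name;
    int  (*open)(const char *path);
    long (*read)(int fd, void *buf, size_t n);
};

/* A toy "FAT" implementation of the interface. */
static int  fat_open(const char *path)            { (void)path; return 3; }
static long fat_read(int fd, void *buf, size_t n) {
    (void)fd;
    memset(buf, 'F', n);   /* pretend we read n bytes of data */
    return (long)n;
}

static const struct fs_ops fat_ops = { "fat32", fat_open, fat_read };

/* The VFS layer: one code path for every mounted file system. */
int vfs_open(const struct fs_ops *fs, const char *path) {
    return fs->open(path);
}
```

Adding support for another file system means writing another `fs_ops` table; nothing above the dispatch layer changes, which is exactly the portability the section describes.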

9. Device drivers: 
- A device driver is specifically developed software that enables interaction with hardware devices. 
- It creates an interface through which communication can be carried out with the device via the communications subsystem or bus, i.e., the means through which the hardware is connected to the system. 
- This program depends on the hardware but is also specific to the operating system.
- It enables application software running under the kernel to interact with the hardware device transparently.
- It also services the interrupts required to handle the asynchronous, time-dependent interfacing needs of the hardware. 
- Abstraction is the key goal of device drivers.
- Every hardware model is different, so the operating system cannot know in advance how each device is to be controlled. 
- As a solution to this problem, operating systems dictate how devices should be controlled. 
- The purpose of a device driver is therefore to translate function calls from the OS into calls specific to the device. 
- A device will function properly provided a suitable driver is available, which ensures normal operation of the device from the viewpoint of the OS. 

Read the next post "What are different components of operating system? – Part 5"


Thursday, April 11, 2013

What are different components of operating system? – Part 3



6.   Virtual memory: 
- This is the component of the operating system that lets it present memory scattered across the RAM and the hard disk to programs as one continuous memory location. 
- This chunk of memory is called virtual memory.
- Using virtual memory addressing techniques such as segmentation and paging means it is up to the kernel to choose what memory a program may use at a given time. 
- This enables the operating system to use the same memory locations for multiple tasks. 
- If a program attempts to access memory outside its currently accessible range, i.e., memory that has not been allocated to it, the kernel is interrupted in the same way it would be if the program exceeded its allocated memory. 
- Such an interrupt is called a page fault.
- When the kernel detects a page fault, it adjusts the application's virtual memory range and grants it access to the memory. 
- In this way, the kernel is given discretionary power over the memory storage of each application. 
- In today's operating systems, memory that is accessed infrequently is temporarily stored on the hard disk or other types of media. 
- This is done to make space for other programs.
- This process is termed swapping, because a memory area is used by a number of programs, and its contents can be exchanged, or swapped, with those of other memory locations on demand. 
- Virtual memory is thus a way of creating the perception that the amount of RAM in the system is larger than it really is.  
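The page fault path above can be modeled in a few lines of C. The table, the "backing store", and the free-frame counter are toy stand-ins invented for this sketch; real kernels must also evict pages when frames run out, which is omitted here.

```c
#define NPAGES      4
#define NOT_PRESENT (-1)

/* Toy page table: NOT_PRESENT means the page lives on the backing
 * store and must be swapped in before it can be used. */
static int frame_of[NPAGES] = { 2, NOT_PRESENT, 0, NOT_PRESENT };
static int faults = 0;
static int next_free_frame = 3;

/* Access a virtual page. If it is absent, a "page fault" occurs:
 * the kernel brings the page into a free frame, then the access
 * retries and succeeds transparently to the program. */
int access_page(int vpn) {
    if (frame_of[vpn] == NOT_PRESENT) {
        faults++;                          /* page fault trap        */
        frame_of[vpn] = next_free_frame++; /* swap the page in       */
    }
    return frame_of[vpn];                  /* access now succeeds    */
}
```

Note that the second access to a faulted page costs nothing extra: once swapped in, the page behaves like any other resident memory.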

7. Multitasking: 
- Multitasking is the execution of multiple independent programs on the same system. 
- It gives the appearance that the tasks are being performed at the same time. 
- Even computers capable of running only one or two programs at once can achieve this through the principle of time sharing. 
- Each program uses a share of the computer's time for its execution. 
- A piece of software known as the scheduler is contained in the operating system; its purpose is to determine how much time is spent executing each program, and in which order execution passes among them.
- It also determines how control is passed to the programs. 
- The kernel passes control to a process, allowing the program to access the CPU and memory. 
- Later, through some mechanism, control is returned to the kernel so that the CPU can be used by another program. 
- This passing of control between the kernel and applications is referred to as a context switch. 
- Modern operating systems have extended the concepts of application preemption to maintain preemptive control over internal run times as well. 
- The philosophy governing pre-emptive multitasking is to make sure that all programs are given regular CPU time.
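A minimal sketch of the time-sharing idea is round-robin scheduling: tasks take turns, one time slice each, in a fixed cycle. The fixed task count and the trace array are simplifications for this example; real schedulers also weigh priorities and blocked states.

```c
#define NTASKS 3

/* Run the given number of time slices round-robin over NTASKS tasks.
 * trace[i] records which task ran during slice i; the return value
 * is the task that would run next. */
int run_slices(int slices, int trace[]) {
    int current = 0;
    for (int i = 0; i < slices; i++) {
        trace[i] = current;               /* this task uses its slice */
        current = (current + 1) % NTASKS; /* context switch to next   */
    }
    return current;
}
```

With three tasks and five slices the trace is 0, 1, 2, 0, 1: every program gets regular CPU time, which is the fairness goal stated above.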

Read the next post "What are different components of operating system? – Part 4"


Wednesday, April 10, 2013

What are different components of operating system? – Part 2


All the components of an OS must work in concert to make the different parts of a system cooperate. Every hardware need of a software application is satisfied through the operating system, be it something as simple as a mouse movement or as complex as Ethernet networking.


4. Modes: 
- Today's CPUs support a number of modes of operation. 
- CPUs capable of supporting multiple modes use at least the following two basic modes:
Ø  Supervisor mode: the kernel of the operating system uses this mode for low-level tasks, i.e., tasks that need unrestricted access to the computer hardware. A few examples of such tasks are communicating with devices such as graphics cards, and controlling the read, write, and erase operations of memory.
Ø  Protected mode: this mode is the opposite of the previous one and is used for everything else. Application software runs in protected mode and can access the computer hardware only by going through the kernel, which executes in supervisor mode.
- There are other modes similar to these two, such as virtual modes that CPUs may use for emulating older processor types: a 32-bit processor on a 64-bit one, a 16-bit one on a 32-bit one, and so on.
- Supervisor mode is the mode in which a computer runs automatically after start-up. 
- The first programs to run include the BIOS or EFI, the boot loader, and the OS. 
These programs require unlimited access to the computer hardware, because a protected environment can only be initialized from outside of one. 
- The CPU is switched to protected mode only when the operating system passes control to some other program. 
- When a program runs in protected mode, the set of CPU instructions it is granted access to can be very limited. 
- A program leaves protected mode by raising an interrupt, which passes control back to the operating system. 
- This is how an operating system maintains exclusive control over access to memory and hardware. 
- The CPU registers holding information that the currently executing program is prohibited from accessing are collectively termed the 'protected mode resources'. 
- Any attempt to alter such a resource switches the system to supervisor mode.
- The OS then deals with the illegal operation; it may kill the program.

5. Memory management: 
- The kernel of a multiprogramming OS is responsible for managing the system memory currently in use by programs. 
- The kernel ensures that executing programs do not interfere with each other's memory.
- Since the time-sharing principle is followed, each program must have independent access to system memory. 
- Early operating systems used cooperative memory management. 
- It was assumed that all programs would use the kernel's memory manager voluntarily, without exceeding the memory allocated to them. 
- However, this style of memory management is practically extinct now, because programs contain bugs that cause them to exceed their limits.
- When a program fails in this way, memory in use by other programs may be overwritten. 
- Memory used by other programs may also be deliberately altered by viruses or other malicious code, which in turn can affect the operation of the OS. 
- Under cooperative management, a misbehaving program can crash the whole system. 
- The kernel therefore limits each program's access to memory through memory protection methods such as paging and segmentation. 
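One of the simplest of these protection methods, a segmentation-style base-and-limit check, can be sketched directly. The struct and the fault convention (returning -1) are inventions for this example; on real hardware the check is done by the MMU, and a violation traps into the kernel.

```c
#include <stdint.h>

/* Each program's segment: where it starts and how much it may use. */
struct segment {
    uint32_t base;   /* first physical address of the segment */
    uint32_t limit;  /* size of the segment in bytes          */
};

/* Check every access against the segment bounds. Returns the
 * physical address, or -1 for a protection fault (where a real
 * system would trap to the kernel, which may kill the program). */
int64_t checked_access(const struct segment *s, uint32_t offset) {
    if (offset >= s->limit)
        return -1;                       /* out of bounds: fault */
    return (int64_t)s->base + offset;    /* safe: translate      */
}
```

Because every program's accesses are forced through its own base and limit, no program can reach another's memory, which is the isolation guarantee described above.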

Read the next post (What are different components of operating system? – Part 3)


Tuesday, April 9, 2013

What are different components of operating system? – Part 1


All the components of an OS must work in concert to make the different parts of a system cooperate. Every hardware need of a software application is satisfied through the operating system, be it something as simple as a mouse movement or as complex as Ethernet networking.

1. Kernel: 
- This component of the OS is the medium through which application software connects to the computer's hardware. 
- It is aided by many device drivers and by firmware. 
- With their help, it provides a basic level of control over all the hardware devices of the system.
- The kernel alone manages programs' access to memory in RAM and ROM. 
- It is the kernel's authority to decide which program gets what access, and at what level. 
- The kernel itself sets up the operating state of the CPU at all times. 
- It also organizes the data to be kept in long-term non-volatile storage such as flash memory, tapes, disks, and so on.

2. Program execution, or processes: 
- The OS is actually an interface between the hardware and the application software.
- Interaction between the two can be established only if the application abides by the rules and procedures of the operating system, as programmed into it. 
- Another purpose of the operating system is to provide a set of services that simplify the development as well as the execution of programs. 
- Whenever a program is to be executed, the kernel of the operating system creates a process and assigns it the other required resources, such as memory. 
- In a multitasking environment, a priority is also assigned to the process. 
- The binary code of the program is loaded into memory and its execution is initiated.

3. Interrupts: 
- This component is a central requirement of operating systems. 
- This is because interrupts provide a way for the OS to interact with its environment that is not only reliable but also efficient. 
- Older operating systems ran with very small stacks and watched the various input sources for events requiring action, a strategy called polling.
- That strategy is not useful in today's operating systems, which use very large stacks. 
- Here, interrupt-based programming is far more fruitful. 
- Modern CPUs have direct built-in support for interrupts. 
- With interrupts, a computer knows when to automatically save the local register context or run specific code in response to occurring events. 
- Even very basic computers support hardware interrupts. 
Interrupts let the programmer specify what code is to be run when a certain event occurs. 
- When the hardware receives an interrupt, it automatically suspends the program it is currently executing. 
- The status of the program is saved, and the code associated with the interrupt is executed. 
- In modern operating systems, the kernel is responsible for handling interrupts. 
- Either a running program or the computer's hardware may raise an interrupt. 
- When an interrupt is triggered by a hardware device, the OS's kernel decides what to do by executing some appropriate code. 
- The code to run is chosen based on the interrupt's priority. 
The task of processing hardware interrupts is assigned to the device driver software. 
- A device driver may be part of the kernel, part of some other program, or both.


Tuesday, October 11, 2011

What are the problems of a C compiler?

Every programming language has advantages and disadvantages that are not present in other languages. Some languages are capable of solving certain problems better than others. But most problems share similar needs, requirements, and logic, so where languages really differ is in the efficiency with which they solve a problem.

Some provide more efficient and fluent solutions, while others don't. As we all know, C is a basic programming language that was developed for writing operating system kernels, compilers, and graphical user interfaces. The C compiler, though fast, is not especially efficient, and C has many downsides for a large number of problems. Many of us think that C is the fastest language, but this is not true.

A C++ compiler compiles most C programs at much the same speed as a C compiler does; only C++-specific features, like virtual function calls, add overhead. C itself is not object oriented, which makes some programs more inconvenient to implement: it cannot enforce object orientation anywhere, and it makes programs that call for object-oriented design more error prone. C also has a weaker typing system than many other programming languages, which lets many programming errors slip through compilation.

A bigger standard library: C++ also allows full use of the C standard library. This is very important, of course, as the C standard library is an invaluable resource when writing real-world programs.
- C++ has a library called the Standard Template Library (STL).
- This standard library contains a number of templates that can be used while developing programs of almost any kind.
- It also includes many useful common data structures, such as lists, maps, and sets, and its routines and data structures can be tailored to the specific needs of the programmer.
- Though the standard library is no silver bullet, it is still a great help in solving general-purpose problems in many programs. Tasks like implementing a linked list in C take a lot of time.
- Even though the compilation is fast, who has the time to write such lengthy code? That is when you feel the need for a language whose library provides shorter, more effective code for implementing lists and other data structures.
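To illustrate the bookkeeping the list above refers to, here is a minimal singly linked list in C. Even just push, length, and cleanup require explicit allocation and pointer handling that `std::list` would provide out of the box (all names here are mine, for illustration):

```c
#include <stdlib.h>

/* Minimal singly linked list: every operation needs manual memory
 * management that the C++ standard library would handle for us. */
struct node { int value; struct node *next; };

/* Prepend a value; returns the new head (old head on allocation failure). */
struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n) return head;
    n->value = value;
    n->next = head;
    return n;
}

/* Walk the chain to count the nodes. */
size_t length(const struct node *head) {
    size_t n = 0;
    for (; head; head = head->next) n++;
    return n;
}

/* Free every node; forgetting this leaks the whole list. */
void free_list(struct node *head) {
    while (head) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}
```

In C++ the equivalent would be a one-line `std::list<int>` declaration, with allocation, traversal, and cleanup all handled by the library.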

Even though the C language was standardized long ago (ANSI C in 1989, with a major revision in 1999), to this day we don't have truly good compilers for it. Producing correct, well-optimized code is complex work, and it requires a correspondingly heavy compiler. Another problem related to C compiling is that, the language being subtle, comparatively few people know how to use it correctly. A lot depends on the programmer as well: if you type in the wrong code, you are sure to get bad results.
Even today, most programs are written in C, and a need is felt to convert them to another programming language for better results. But conversion is no good solution: converting a C program often ends up in rewriting almost the entire content of the program.

Programs written in C will always have two major problems. First, their code will be unusually lengthy and time-consuming to write. Second, program execution can be slower even though the compilation time is very small. C program code requires more time to read, write, and understand; an ineffective C compiler can crash at any time; and there is a lack of high-level routines in C.

