Tuesday, January 15, 2008

Exercise 1-6

1.) What are the major differences between
deadlock, starvation, and race?
- Deadlock is more serious than indefinite postponement or starvation because it affects more than one job. Because resources are being tied up, the entire system (not just a few programs) is affected.
- Starvation is the result of conservative allocation of resources, in which a single job is prevented from executing because it is kept waiting for resources that never become available.
- A race is a synchronization problem between processes vying for the same resource.
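To make the race concrete, here is a minimal Python sketch (my own, not from the exercise): two threads update a shared counter without synchronization. The increment is a read-modify-write that is not atomic, so updates can be lost and the final total may come out below the expected value.

    import threading

    counter = 0

    def bump(times):
        global counter
        for _ in range(times):
            counter += 1   # read-modify-write: not atomic, so updates can be lost

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # may print less than 400000 when the race strikes

With a lock around the increment, the result is always 400000; without it, the outcome depends on how the scheduler interleaves the threads, which is exactly what makes races hard to reproduce.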

2.) Give some real-life examples of deadlock, starvation, and race.

Real-life examples of deadlock:
- Client applications using a database may require exclusive access to a table, and in order to gain exclusive access they ask for a lock.
- Bridge traffic can only flow in one direction.
- Each entrance of the bridge can be viewed as a resource.
- If a deadlock occurs, it can be resolved if one car backs up (resource preemption).
- A text formatting program that accepts text sent to it to be processed and then returns the results.

3.) Select one example of deadlock from exercise 2
and list the four necessary conditions needed for
the deadlock.
- Client applications using a database may require exclusive access to a table, and in order to gain exclusive access they ask for a lock. A deadlock arises if one client application holds the lock on one table and attempts to obtain the lock on a second table that is already held by a second application, while that second application in turn attempts to obtain the lock held by the first. All four necessary conditions are present: mutual exclusion (each lock can be held by only one application at a time), resource holding (each application holds one lock while waiting for another), no preemption (a lock cannot be taken away from the application holding it), and circular wait (each application waits for a lock held by the other). (But this particular type of deadlock is easily prevented, e.g., by using an all-or-none resource allocation algorithm.)
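
As an illustration of the all-or-none idea mentioned above, here is a minimal Python sketch (the names are my own): a client tries to take every lock it needs, and if any one of them is busy it releases everything it already holds and reports failure instead of waiting. Because no thread ever waits while holding a lock, the resource-holding condition is eliminated; a caller that fails simply retries later, which can cost throughput but cannot deadlock.

    import threading

    def try_acquire_all(locks):
        """Take every lock or none: on any failure, roll back and return False."""
        acquired = []
        for lock in locks:
            if lock.acquire(blocking=False):   # never block while holding a lock
                acquired.append(lock)
            else:
                for held in acquired:          # all-or-none: release what we hold
                    held.release()
                return False
        return True

    table_a, table_b = threading.Lock(), threading.Lock()
    if try_acquire_all([table_a, table_b]):
        try:
            pass   # ... work with both tables ...
        finally:
            table_a.release()
            table_b.release()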

4.) Suppose the narrow staircase has become a major source of aggravation. Design an algorithm for using it so that both deadlock and starvation are not possible.
- One approach: treat the staircase as a single resource that people acquire in strict first-come, first-served order, as sketched below.
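
A minimal sketch of such a policy in Python (the class and names are my own invention): the staircase is handed out in FIFO ticket order, so no two people can ever block each other (no deadlock is possible with a single resource and no hold-and-wait), and every waiter is served after finitely many turns (no starvation). It admits one person at a time, which is stricter than necessary but safe.

    import threading

    class Staircase:
        """One person at a time, served in FIFO ticket order."""
        def __init__(self):
            self._cond = threading.Condition()
            self._next_ticket = 0   # next ticket to hand out
            self._serving = 0       # ticket currently allowed on the stairs

        def enter(self):
            with self._cond:
                ticket = self._next_ticket
                self._next_ticket += 1
                while ticket != self._serving:   # FIFO order bounds the wait
                    self._cond.wait()

        def leave(self):
            with self._cond:
                self._serving += 1               # let the next ticket through
                self._cond.notify_all()

A person calls enter() before stepping on (in either direction) and leave() after stepping off.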
5.)
A. Can deadlock occur? How can it happen and under what circumstances?
- A deadlock can occur if two processes each access and lock records in a database: each holds the record the other needs and waits indefinitely for it to be released.

B. How can deadlock be detected?
- Deadlock can be detected by building directed resource
graphs and looking for cycles.
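
A small Python sketch of that detection technique (the graph encoding is my own): processes and resources are nodes, allocation and request edges are directed, and a depth-first search that reaches a node already on its current path has found a cycle, hence a deadlock.

    def has_cycle(graph):
        """Return True if the directed graph contains a cycle."""
        WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current DFS path / finished
        color = {}

        def visit(node):
            color[node] = GREY
            for succ in graph.get(node, ()):
                if color.get(succ, WHITE) == GREY:   # back edge: a cycle
                    return True
                if color.get(succ, WHITE) == WHITE and visit(succ):
                    return True
            color[node] = BLACK
            return False

        return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

    # P1 holds R1 and requests R2; P2 holds R2 and requests R1.
    g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
    print(has_cycle(g))   # True: P1 -> R2 -> P2 -> R1 -> P1 is a deadlock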

C. Give a solution to prevent deadlock, but watch out for starvation.
- To prevent a deadlock, the operating system must eliminate one of the four necessary conditions, a task complicated by the fact that the same condition can't be eliminated from every resource.
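
One standard way to eliminate the circular-wait condition, sketched below in Python (the helper and names are illustrative), is to impose a single global ordering on resources and always acquire locks in that order; no cycle of waiters can then form, and as long as the underlying locks are granted fairly, no thread starves.

    import threading

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def with_both(first, second, work):
        """Acquire two locks in a fixed global order (here: by id) so that
        a circular wait can never form among callers."""
        low, high = sorted((first, second), key=id)
        with low:
            with high:
                work()

    # Both calls acquire in the same order, so they can never deadlock:
    with_both(lock_a, lock_b, lambda: print("transfer 1"))
    with_both(lock_b, lock_a, lambda: print("transfer 2"))
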
6.)
A. Is this system deadlocked?
- This system is deadlocked because the graph can't be completely reduced.

B. Are there any blocked processes?
- Yes, there are blocked processes in this system.

C. What is the resulting graph after reduction by P1?
- The link between R1 and P1 can be removed, so R1 is released and allocated to P2; then P2 has all of its requested resources and can finish successfully.

D. What is the resulting graph after reduction by P2?
- After the reduction by P2, R2 is finally released and allocated to P1.

E. Both P1 and P2 have requested R2:
1. What is the status of the system if P2's request is granted before P1's?
- If P2's request is granted first, P2 finishes and releases R2, so P1's request is then granted and P1 finishes as well.

2. What is the status of the system if P1's request is granted before P2's?
- If P1's request is granted before P2's, P1 finishes and releases R1; then P2 has all of its requested resources and finishes successfully.
The First PC Operating System

In 1974, Dr. Gary A. Kildall, while working for Intel Corporation, created CP/M as the first operating system for the new microprocessor. By 1977, CP/M had become the most popular operating system (OS) in the fledgling microcomputer (PC) industry. The largest Digital Research licensee of CP/M was a small company which had started life as Traf-O-Data and is now known as Microsoft. In 1981, Microsoft paid Seattle Computer Products for an unauthorized clone of CP/M and licensed this clone to IBM, which marketed it as PC-DOS on the first IBM PC; Microsoft marketed it to all other PC OEMs as MS-DOS.


This paper describes the first operating system, which was designed for the first internally programmed electronic digital computer, the EDVAC. The operating system was developed during 1952 and was implemented early in 1953. Devised for one of the earliest electronic digital computers, its capacities were modest by later standards, yet some of its features are recognizable in later operating systems. It was planned carefully, and it was comprehensive and useful in the context of its time.

Thursday, December 13, 2007

case study IT-222 Operating system

Memory Management in Linux

Since the early days of computing, there has been a need for more memory than exists physically in a system. Strategies have been developed to overcome this limitation, and the most successful of these is virtual memory. Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing processes as they need it. This sleight of hand is invisible to those processes and to the users of the system. Virtual memory allows large address spaces, fair physical memory allocation, protection, and shared virtual memory.

(Figure: a typical memory hierarchy.)



Overview of memory management

Traditional Unix tools like 'top' often report a surprisingly small amount of free memory after a system has been running for a while. For instance, after about 3 hours of uptime, the machine I'm writing this on reports under 60 MB of free memory, even though I have 512 MB of RAM on the system. Where does it all go?

The biggest place it's being used is in the disk cache, which is currently over 290 MB. This is reported by top as "cached". Cached memory is essentially free, in that it can be replaced quickly if a running (or newly starting) program needs the memory.

The reason Linux uses so much memory for disk cache is that RAM is wasted if it isn't used. Keeping the cache means that if something needs the same data again, there's a good chance it will still be in the cache in memory. Fetching the information from there is around 1,000 times quicker than getting it from the hard disk. If it's not found in the cache, the hard disk needs to be read anyway, and in that case nothing has been lost in time.
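
To see these numbers for yourself, here is a small Python sketch (Linux only) that reads /proc/meminfo and, following the reasoning above, counts Buffers and Cached as reclaimable:

    def meminfo():
        """Parse /proc/meminfo into a dict of values in kB."""
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                fields[key] = int(rest.split()[0])
        return fields

    m = meminfo()
    free_kb = m["MemFree"]
    cache_kb = m.get("Buffers", 0) + m.get("Cached", 0)
    print(f"free: {free_kb} kB; free + reclaimable cache: {free_kb + cache_kb} kB")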


Tuesday, November 27, 2007

Research Topic 2 - IT 222 OPERATING SYSTEMS

Consider both Windows and UNIX operating systems.
Compare and contrast how each implements virtual memory

Virtual memory in UNIX
Virtual memory is an internal “trick” that relies on the fact that not every executing task is always referencing its RAM region. Since all RAM regions are not constantly in use, UNIX uses a paging algorithm that moves RAM pages to the swap disk when it appears that they will not be needed in the immediate future.

RAM demand paging in UNIX

As memory regions are created, UNIX will not refuse a new task whose RAM requests exceed the amount of RAM. Rather, UNIX will page out the least recently referenced RAM pages to the swap disk to make room for the incoming request. When the physical limit of the RAM is exceeded, UNIX can wipe out RAM regions because they have already been written to the swap disk.

Once a RAM region has been moved to swap, any subsequent reference by the originating program requires UNIX to page the region back into RAM to make the memory accessible. Page-in operations involve disk I/O and are a source of slow performance; hence, avoiding page-in operations is an important concern for the Oracle DBA.

The main functions of paging are performed when a program tries to access a page that does not currently reside in RAM, a situation that causes a page fault. The operating system then:

1. Takes control and handles the page fault in a manner invisible to the faulting program.

2. Determines the location of the data in auxiliary storage.

3. Determines the page frame in RAM to use as a container for the data.

4. If the page currently residing in the chosen frame has been modified since loading (i.e., it is dirty), writes the page back to auxiliary storage.

5. Loads the requested data into the available page frame.

6. Returns control to the program, transparently retrying the instruction that caused the page fault.
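
The six steps can be followed in a toy simulation. The Python sketch below is entirely illustrative (the frame limit, the backing_store dict, and the naive victim choice are all made up): a fault handler locates the data on "disk", frees a frame if necessary, writes back a dirty victim, and loads the page.

    class Pager:
        """Toy demand pager following steps 1-6 above."""
        def __init__(self, num_frames, backing_store):
            self.frames = {}                    # page -> data currently in "RAM"
            self.dirty = set()                  # pages modified since loading
            self.num_frames = num_frames
            self.backing_store = backing_store  # page -> data on "disk"

        def access(self, page, write=False):
            if page not in self.frames:                  # step 1: page fault
                data = self.backing_store[page]          # step 2: locate on disk
                if len(self.frames) >= self.num_frames:  # step 3: pick a frame
                    victim = next(iter(self.frames))     # naive victim choice
                    if victim in self.dirty:             # step 4: write back dirty page
                        self.backing_store[victim] = self.frames[victim]
                        self.dirty.discard(victim)
                    del self.frames[victim]
                self.frames[page] = data                 # step 5: load the page
            if write:
                self.dirty.add(page)
            return self.frames[page]                     # step 6: complete the access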

The need to reference memory at a particular address arises from two main sources:

  • The processor fetching and executing a program's instructions.
  • A program's instructions accessing data.

In step 3, when a page has to be loaded and all existing page frames in RAM are currently in use, one of the existing pages must be swapped out to make room for the requested page. The paging system must choose a victim page, ideally one that is least likely to be needed within a short time. There are various page replacement algorithms that address this issue.

Most operating systems use the least recently used (LRU) page replacement algorithm. The theory behind LRU is that the least recently used page is the most likely one not to be needed shortly; when a new page is needed, the least recently used page is discarded. This algorithm is most often correct but not always: e.g. a sequential process moves forward through memory and never again accesses the most recently used page.
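
A compact LRU simulation in Python (the reference strings and frame counts are arbitrary) shows both the normal case, where reuse is captured, and the sequential-scan case the text mentions, where LRU faults on every access:

    from collections import OrderedDict

    def lru_faults(reference_string, num_frames):
        """Count page faults under LRU replacement."""
        frames = OrderedDict()                 # insertion order tracks recency
        faults = 0
        for page in reference_string:
            if page in frames:
                frames.move_to_end(page)       # mark as most recently used
            else:
                faults += 1
                if len(frames) >= num_frames:
                    frames.popitem(last=False) # evict the least recently used
                frames[page] = True
        return faults

    print(lru_faults([1, 2, 1, 3, 1, 2], 2))   # 4 faults: some reuse is captured
    print(lru_faults(list(range(10)) * 3, 3))  # sequential scan: all 30 accesses fault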

Most programs that become active reach a steady state in their demand for memory, with locality both in the instructions fetched and in the data accessed. This steady state is usually much smaller than the total memory required by the program, and is sometimes referred to as the working set: the set of memory pages that are most frequently accessed.

Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages that can be stored in RAM is low enough to minimize the number of page faults. A program that works with huge data structures will sometimes require a working set that is too large to be efficiently managed by the paging system, resulting in constant page faults that drastically slow down the system. This condition is referred to as thrashing: a page is swapped out and then immediately accessed again, causing frequent faults.

An interesting characteristic of thrashing is that as the working set grows, there is very little increase in the number of faults until a critical point, when faults go up dramatically and the majority of the system's processing power is spent on handling them.


Virtual memory in Windows

Windows, in addition to the RAM, uses part of the hard disk for storing temporary files and information: data that are not required immediately, for example when you minimize a window or have an application running in the background. Although Windows' management of virtual memory has grown more efficient, it still tends to access the hard disk very often, many times unnecessarily, because it is programmed to keep the RAM free. With a little tweaking, you can optimize this access, not only making sure that Windows uses this feature sparingly and sensibly but also speeding up file access generally.

Virtual Memory in Windows NT

The virtual-memory manager (VMM) in Windows NT is nothing like the memory managers used in previous versions of the Windows operating system. Relying on a 32-bit address model, Windows NT is able to drop the segmented architecture of previous versions of Windows. Instead, the VMM employs 32-bit virtual addresses for directly manipulating the entire 4-GB process address space. At first this appears to be a restriction because, without segment selectors for relative addressing, there is no way to move a chunk of memory without having to change the address that references it. In reality, the VMM is able to do exactly that by implementing virtual addresses. Each application is able to reference a physical chunk of memory, at a specific virtual address, throughout the life of the application. The VMM takes care of whether the memory should be moved to a new location or swapped to disk completely independently of the application, much like updating a selector entry in the local descriptor table (LDT).

Windows versions 3.1 and earlier employed a scheme for moving segments of memory to other locations in memory both to maximize the amount of available contiguous memory and to place executable segments in the location where they could be executed. An equivalent operation is unnecessary in Windows NT's virtual memory management system for three reasons. One, code segments are no longer required to reside in the 0-640K range of memory in order for Windows NT to execute them. Windows NT does require that the hardware have at least a 32-bit address bus, so it is able to address all of physical memory, regardless of location. Two, the VMM virtualizes the address space such that two processes can use the same virtual address to refer to distinct locations in physical memory. Virtual address locations are not a commodity, especially considering that a process has 2 GB available for the application. So, each process may use any or all of its virtual addresses without regard to other processes in the system. Three, contiguous virtual memory in Windows NT can be allocated discontiguously in physical memory. So, there is no need to move chunks to make room for a large allocation.
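
The point about two processes using the same virtual address is easy to model. In the toy Python sketch below (all names and numbers invented), each process has its own page table, so virtual page 0x10 resolves to a different physical frame in each process and neither can see the other's data:

    physical_memory = {}   # frame number -> contents (our toy "RAM")

    class Process:
        def __init__(self, name):
            self.name = name
            self.page_table = {}   # virtual page number -> physical frame

        def map_page(self, vpage, frame):
            self.page_table[vpage] = frame

        def write(self, vpage, value):
            physical_memory[self.page_table[vpage]] = value

        def read(self, vpage):
            return physical_memory[self.page_table[vpage]]

    p1, p2 = Process("p1"), Process("p2")
    p1.map_page(0x10, frame=7)    # the same virtual page...
    p2.map_page(0x10, frame=42)   # ...backed by different physical frames
    p1.write(0x10, "p1 data")
    p2.write(0x10, "p2 data")
    print(p1.read(0x10), "/", p2.read(0x10))   # p1 data / p2 data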

Wednesday, November 21, 2007

Research Topic - Operating System

1.) Main Article: Operating System
An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. At the foundation of all system software, an operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking, and managing file systems. Most operating systems come with an application that provides a user interface for managing the operating system, such as a command line interpreter or graphical user interface. The operating system forms a platform for other system software and for application software.
The most commonly used contemporary desktop OS is Microsoft Windows, with Mac OS X also being well-known. Linux and the BSDs are popular Unix-like systems.
Process management
Every program running on a computer, be it a service or an application, is a process. As long as a von Neumann architecture is used to build computers, only one process per CPU can run at a time. Older microcomputer OSes such as MS-DOS did not attempt to bypass this limit, with the exception of interrupt processing, and only one process could be run under them (although DOS itself featured TSRs as a very partial and not especially easy-to-use solution). Mainframe operating systems have had multitasking capabilities since the early 1960s. Modern operating systems enable concurrent execution of many processes at once via multitasking, even with one CPU. Process management is an operating system's way of dealing with running multiple processes.
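
As a small illustration of multitasking from user space, the Python sketch below asks the OS to run three processes at once; how their output interleaves is up to the scheduler:

    from multiprocessing import Process
    import os
    import time

    def worker(n):
        print(f"process {os.getpid()} handling task {n}")
        time.sleep(0.1)   # pretend to do some work

    if __name__ == "__main__":
        procs = [Process(target=worker, args=(i,)) for i in range(3)]
        for p in procs:
            p.start()      # the OS now schedules these concurrently
        for p in procs:
            p.join()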

SUMMARY

“The rules for PC buyers are quite different than those governing high-end buyers. IT consumers of high-end equipment weigh features and stability along with vendor credibility, as they assume they will have to hire the expertise to manage any environment they choose. IT consumers of PC equipment tend to stick with the safe lowest-common-denominator choice and leverage their existing expertise on staff.” —Nicholas Petreley, “The new Unix alters NT’s orbit”, NC World
Supercomputing is primarily scientific computing, usually modelling real systems in nature. Render farms are collections of computers that work together to render animations and special effects. Work that previously required supercomputers can be done with the equivalent of a render farm.

Mainframes used to be the primary form of computer. Mainframes are large centralized computers. At one time they provided the bulk of business computing through time sharing. Mainframes and mainframe replacements (powerful computers or clusters of computers) are still useful for some large scale tasks, such as centralized billing systems, inventory systems, database operations, etc. When mainframes were in widespread use, there was also a class of computers known as minicomputers which were smaller, less expensive versions of mainframes for businesses that couldn’t afford true mainframes.
Servers are computers or groups of computers used for internet serving, intranet serving, print serving, file serving, and/or application serving. Servers are also sometimes used as mainframe replacements.
Desktop operating systems are used for personal computers.
Workstations are more powerful versions of personal computers. Often only one person uses a particular workstation (as with a desktop), and workstations often run a more powerful version of a desktop operating system on more powerful hardware, often with software associated with larger computer systems.

Handheld operating systems are much smaller and less capable than desktop operating systems, so that they can fit into the limited memory of handheld devices.
Real time operating systems (RTOS) are specifically designed to respond to events that happen in real time. This can include computer systems that run factory floors, computer systems for emergency room or intensive care unit equipment (or even the entire ICU), computer systems for air traffic control, or embedded systems. RTOSs are grouped according to the response time that is acceptable (seconds, milliseconds, microseconds) and according to whether or not they involve systems where failure can result in loss of life.
Embedded systems are combinations of processors and special software that are inside of another device, such as the electronic ignition system on cars.

2.) Give at least two reasons why a regional bank might decide to buy six server computers instead of one supercomputer.

· A regional bank would choose six server computers over one supercomputer because all the data are stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, guaranteeing that only those clients with the appropriate permissions may access and change data, while still handling high processing volumes and resource requirements with extremely high availability.

· The second reason to buy six server computers instead of one supercomputer is the effect on the network: faster workflow and tighter security. By centralising databases and files, it is easier to manage, exchange, and share information between workstations.