(For ease of calculation, you can assume 1K = 1000, 1M = 1000K, etc.)

(a) (3 pts) How many physical frames are there in the system?
Ans: Number of physical frames = physical memory size / page size (with 4K pages).

(b) (3 pts) What is the maximum number of pages that a process can have in its virtual address space?
Ans: The amount of virtual address space addressable by each process = 2^32 = 4GB. Therefore, maximum number of pages = 4G / 4K = 1M.

(c) (12 pts) Suppose the page table for a process P contains the following entries.
Page No.:  3  2  1  0
Frame No.: 1  3  4  7

Specify what physical address each of the following virtual addresses maps to. Show your calculations here.

i. 3456
Ans: Note that a virtual address can be represented in terms of its page number and offset within the page as follows:

Virtual address = Page no. * Page size + Offset

Thus, given a virtual address, it can be converted into its page number and offset as follows:

Page number = Address div (Page size)
Offset = Address mod (Page size)

Here, div and mod correspond to the quotient and remainder from the division. Thus, for address 3456, the page number is 3456 div 4K = 0, and the offset is 3456 mod 4K = 3456. To get the physical address, first the page number should be converted to the corresponding frame no. from the page table.
In this case, the frame no. for page 0 is 7. Then, the physical address = frame no. * page size + offset. Here, this would be 7*4K + 3456 = 31456.

ii. 13456
Ans: Page number = 13456 div 4K = 3. Offset = 13456 mod 4K = 1456. Physical frame corresponding to page 3 = 1.
Physical address = 1*4K + 1456 = 5456.

2. (a) (4 pts) Threads from the same process share the global data and the heap.
Give one advantage and one disadvantage of this sharing.
Ans: Advantage: Sharing the global data and the heap makes it very efficient for threads to communicate and to share data with each other.
Disadvantage: Sharing data in this manner can lead to race conditions and possible data corruption without proper synchronization.

(b) (4 pts) What is the reason for threads from the same process to have separate stacks? Give one possible drawback of the fact that all these stacks reside in the same address space.
Ans: Each thread has its own stack to keep track of its own control flow for the execution path it is following. Giving each thread its own stack allows different threads to be performing different activities within a process at the same time, and helps increase parallelism.
Possible drawback: The amount of stack space available to each thread is constrained by the number of threads that can be executing at the same time. Also, since these stacks are in the same address space, a thread can access the stack of another thread, and there could be stack overflows caused by bugs, creating security and memory corruption problems.

3. Suppose you have 10 threads numbered 0-9, where thread i executes the following piece of code:

foo(i); bar(i);

Here, foo and bar are functions defined elsewhere. The threads run concurrently, and their order of execution or the interleaving of their instructions is non-deterministic. For each of the following, show how you will modify the code for thread i using semaphores to achieve the desired execution behavior.
Note: For each semaphore that you use, show where you will add its wait and/or signal operations, and also specify its initial value. Also note: You can use pseudocode instead of POSIX/C syntax for your solution.

(a) (6 pts) Have each thread execute its code (both foo and bar) in a mutually exclusive manner. The order in which the threads execute does not matter.
Ans: This is a classical critical section problem, and we basically need a mutex lock here. Recall that a semaphore with an initial value of 1 can be used identically to a mutex lock (since it allows only 1 thread to be in the critical section at a time). So the solution is as follows.
Declare a global semaphore: semaphore sem = 1;
Code for thread i: wait(sem); foo(i); bar(i); signal(sem);

(b) (12 pts) Have each thread execute foo in a mutually exclusive manner, but allow up to 5 threads to execute bar concurrently. The order in which the threads execute does not matter.
Ans: Here, executing foo is again a classical critical section problem, which can be solved similarly to part (a). However, executing bar allows multiple threads to be in the critical section, and this can be achieved by initializing the semaphore with the desired number.
Declare 2 global semaphores: semaphore foo_sem = 1; semaphore bar_sem = 5;
Code for thread i: wait(foo_sem); foo(i); signal(foo_sem); wait(bar_sem); bar(i); signal(bar_sem);

(c) (12 pts) Starting with thread 0, have thread i execute its code (both foo and bar) before thread i+1, i.e., the desired order of execution should be: foo(0) bar(0) foo(1) bar(1) ... foo(9) bar(9)
Ans: Here, the semaphores are used to synchronize the order of execution of different threads. We have seen examples in the class and in the textbook of how to order the execution of two threads using semaphores.
Here, we will extend that to 10 threads.

Have a global array of semaphores: semaphore sem[10];
Initialize sem[0] = 1, and sem[1] ... sem[9] = 0.
Code for thread i: wait(sem[i]); foo(i); bar(i); if (i < 9) signal(sem[i+1]);

Note: You can have code where thread 0 does not have the wait statement, and/or thread 9 does not have the signal statement. I'll also accept answers where you don't have an array but have shown different semaphores for each thread and explained how it will work.

(d) (24 pts) Starting with thread 0, have thread i execute foo before thread i+1 executes foo. Once all threads have executed foo, reverse the order of execution for bar, i.e., have thread i execute bar before thread i-1, starting with thread 9.
In other words, the desired order of execution should be: foo(0) foo(1) ... foo(9) bar(9) bar(8) ... bar(0)
Ans: The solution here is very similar to (c), except that we now need 2 sets of semaphores to order the thread executions, one going forward, and the other going backward. The use of the 2 sets of semaphores in this case will simply be mirror images of each other.

Have two global arrays of semaphores: semaphore fsem[10]; semaphore bsem[10];
Initialize fsem[0] = 1, and fsem[1] ... fsem[9] = 0.
Initialize bsem[9] = 1, and bsem[0] ... bsem[8] = 0.
Code for thread i: wait(fsem[i]); foo(i); if (i < 9) signal(fsem[i+1]); wait(bsem[i]); bar(i); if (i > 0) signal(bsem[i-1]);

Again, I'll accept variations on the above answers, with similar caveats as in (c).

4. (20 pts) Suppose you are using a Web browser for shopping at an online bookstore.
Assume that the underlying network supports the TCP/IP family of protocols.

(a) If the network loses many packets, in what way will this impact your online shopping experience? Explain your answer.
Ans: The Web connection uses the HTTP protocol, which runs over TCP. TCP tries to provide a connection-oriented, reliable communication channel over the connectionless, unreliable IP network. If the network loses many packets, the TCP layer will have to retransmit lots of packets in order to maintain its connection-oriented, reliable view of the network. This will make the connection to the Web server appear really slow to the user.

(b) If the machine hosting the online store is replaced with another machine having a different IP address, would clients still be able to find and access the website? Explain your answer.
Ans: Yes.
The Domain Name Service (DNS) translates hostnames and URLs into machine IP addresses. So, even though the Web server moves to a machine with a different IP address, clients will be able to find the new IP address through DNS and be able to connect to it. The address mapping itself may take some time to become visible to the clients, so there may be a short period during which the server is unavailable to them.

(c) If the server for the online store started listening on a random port (instead of the default HTTP port 80), would clients still be able to find and access the website? Explain your answer.
Ans: No. There is no mechanism for clients to find the port number of the server they are interested in connecting to, and clients typically rely on using well-known port numbers for standard services (such as port 80 for HTTP).
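To illustrate part (b): a client resolves the hostname at connection time, so it picks up whatever IP address DNS currently returns rather than a hard-coded one. A minimal POSIX sketch using getaddrinfo, with the well-known HTTP port from part (c) (the hostname here is illustrative):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

/* Resolve a hostname for TCP on port 80; return 0 on success.
   Whatever address DNS currently maps the name to is what gets used. */
int resolve(const char *host) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6            */
    hints.ai_socktype = SOCK_STREAM;  /* TCP, as HTTP uses       */
    int rc = getaddrinfo(host, "80", &hints, &res);
    if (rc == 0) freeaddrinfo(res);   /* a real client would connect() here */
    return rc;
}

int main(void) {
    printf("localhost resolves: %s\n", resolve("localhost") == 0 ? "yes" : "no");
    return 0;
}
```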