Distributed and Parallel Systems

1. Introduction

Parallel processing is key to modern computers. Parallel computing is the simultaneous execution of a task, split into smaller sub-tasks, on multiple processors in order to obtain results faster. The idea rests on the fact that the process of solving a problem can usually be divided into smaller tasks that are carried out simultaneously with some coordination. A parallel system is a computer with more than one processor for parallel processing. Although there are many kinds of parallel computers, they are distinguished mainly by the kind of interconnection between the processors, known as processing elements (PEs), and the memory. One major way to classify parallel computers is by their memory architecture: shared-memory parallel systems have multiple processors that access all available memory as a global address space.

A distributed system (DS) is a network of asynchronously connected computing devices. Communication in a DS takes place either through shared memory or through messages. In wide-spread distributed systems, work and information are physically distributed, implying that computing needs should be distributed as well. Along with improving response time, such a system offers local control over data. Against this background of multiprocessor systems, parallel computing, distributed systems and shared memory, speedup performance laws such as Amdahl's law were introduced to guide algorithm design for speedup and operational efficiency of parallel systems. Concurrency is another important factor that enables distributed systems to share memory; to better understand its execution, the Dining Philosophers' problem, the Bully algorithm and logical clocks are introduced below.

2. Amdahl's Law

The memory organization of a parallel system has a profound impact on algorithm design. In systems where memory is distributed among the processors, the interconnection topology is the major factor in designing an algorithm to solve a given problem, and as the number of processors grows it becomes increasingly difficult to design algorithms that achieve high speedup. Systems with a global shared memory, on the other hand, provide more flexibility for algorithm design, although memory module contention can adversely affect performance and the algorithm should be designed to minimize it (Yuan 1996).

In parallel computing, speedup refers to how much faster a parallel algorithm is than the corresponding sequential algorithm. It is defined by:

    Sp = T1 / Tp

where P is the number of processors, T1 is the execution time of the sequential algorithm, and Tp is the execution time of the parallel algorithm on P processors.

Amdahl's speedup performance law is based on a fixed workload, or fixed problem size. Amdahl's law, named after computer architect Gene Amdahl, is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup obtainable with multiple processors. For example, consider a task with two independent parts, A and B. Part B takes roughly 25% of the time of the whole computation. With more processors, this part can be made 5 times faster, but that reduces the time of the whole computation only a little. In contrast, one may need to perform less work to make part A twice as fast, yet this makes the computation much faster than optimizing part B, even though B got the bigger speedup (5x versus 2x) (Yuan 1996). The arithmetic is worked through in the sketch below.
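The following small C program, a minimal sketch of our own rather than anything from the cited paper (the helper name overall_speedup is ours), works through this two-part example numerically. With the original runtime normalized to 1.0, making a fraction f of the work k times faster leaves a total time of (1 - f) + f/k.

    #include <stdio.h>

    /* Overall speedup when a fraction f of the runtime is made k times
     * faster (original runtime normalized to 1.0). */
    static double overall_speedup(double f, double k) {
        return 1.0 / ((1.0 - f) + f / k);
    }

    int main(void) {
        /* Part B is 25% of the runtime and can be made 5x faster. */
        printf("Speed up B (25%%) by 5x: overall %.2fx\n",
               overall_speedup(0.25, 5.0));
        /* Part A is the remaining 75% and can be made 2x faster. */
        printf("Speed up A (75%%) by 2x: overall %.2fx\n",
               overall_speedup(0.75, 2.0));
        return 0;
    }

On these numbers the overall gains are 1.25x and 1.60x respectively, matching the claim in the text.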
Figure 2.1 Amdahl's Law

Amdahl's law means that it is the algorithm that decides the speedup, not the number of processors. In general, the goal in large-scale computation is to get as much work done as possible in the shortest possible time within budget. The computational workload is often fixed, with a fixed problem size; as the number of processors in a parallel computer increases, the fixed load is distributed over more processors for parallel execution. The primary goal is minimal turnaround time.

Let Q(n) be the lumped sum of all system overheads on an n-processor system, and let wi be the amount of work executed by the program with degree of parallelism (DOP) i. The fixed-load speedup is then

    Sn = (sum over i of wi) / (sum over i of wi / i + Q(n))

where Q(n) depends on the application and the machine. For the special case in which the computer operates either in purely sequential mode (DOP = 1) or in perfectly parallel mode (DOP = n), that is, wi = 0 for every i other than 1 and n, Amdahl derived the fixed-load speedup

    Sn = (w1 + wn) / (w1 + wn / n) = n / (1 + (n - 1) * alpha), where alpha = w1 / (w1 + wn).

Thus Amdahl's law implies that the sequential portion of the program, w1, remains the same irrespective of the machine size n, while the parallel portion is evenly executed by n processors in reduced time.

Figure 2.2 Fixed-load speedup model and Amdahl's law: (a) fixed workload, (b) decreasing execution time, (c) speedup with a fixed load

The major problem with Amdahl's law, however, is that the fixed load prevents scalability in performance: no matter how many processors are used, the speedup cannot rise above 1/alpha. This alpha has been called the sequential bottleneck of a program (Yuan 1996). Because different inputs travel through different paths in a program, different step counts result. The speedup and the number of processors P can be related in three possible ways: sub-linear speedup (Speedup < P), linear speedup (Speedup = P), and super-linear speedup (Speedup > P). Since a practical parallel program requires its final answers to be combined in a single program, the serial percentage in Amdahl's law is in practice never zero, so, theoretically, super-linear speedups are not possible. Ideally, though, two factors can produce super-linear speedups: (1) the speedup calculation uses a serial execution that was limited by resources, and (2) a parallel implementation can eliminate a large number of calculation steps while giving the same output as the serial implementation (Yuan 1996). The bound n / (1 + (n - 1) * alpha) and its 1/alpha ceiling are illustrated in the sketch below.
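As a quick illustration (ours, not from the original paper), the C sketch below evaluates Sn = n / (1 + (n - 1) * alpha) for an assumed sequential fraction alpha = 0.05 and shows the speedup flattening toward the 1/alpha = 20 ceiling however many processors are added.

    #include <stdio.h>

    /* Amdahl's fixed-load speedup for sequential fraction a on n processors. */
    static double amdahl(double a, int n) {
        return n / (1.0 + (n - 1) * a);
    }

    int main(void) {
        const double a = 0.05;  /* assumed: 5% of the work is sequential */
        int sizes[] = { 1, 2, 4, 8, 16, 64, 256, 1024 };
        for (int i = 0; i < 8; i++)
            printf("n = %4d   Sn = %6.2f\n", sizes[i], amdahl(a, sizes[i]));
        /* The speedup can never exceed this, however large n becomes. */
        printf("ceiling 1/a = %.2f\n", 1.0 / a);
        return 0;
    }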
3. Concurrency

Concurrency has been tagged as the "next big thing" in mainstream software development. The parallel future is already taking shape, with every machine working as a parallel machine, and it will require major advances and changes in the way software is developed. There are enormous hardware imperatives behind this drastic shift in software design and computer architecture from uniprocessors to multiprocessors. The fundamental shift in computing is in the hardware: for the past thirty years, improvements in processor implementation and semiconductor fabrication steadily increased the speed at which computers executed existing sequential programs. The architectural move to multi-core processors, however, benefits only concurrent applications, and hence does not help most existing mainstream software (Herb & James 2005).

The rapid progress made in hardware technology has increased the economic feasibility of building new-generation computers. Looking forward, present desktop applications will not run as fast as they do now; they may even run slower on new chips, because individual cores will become simpler and execute at comparatively lower clock speeds to reduce overall power usage on heavily architected multi-core processors. Computers will have to exploit maximum concurrency between running programs to keep increasing performance, and they will become increasingly capable of doing so (Herb & James 2005). Beyond these facts, another important reason for achieving concurrency is to improve responsiveness in distributed computing by executing work asynchronously rather than synchronously. For instance, today's applications must shift their work off the GUI thread so the screen can be redrawn while computation runs in the background. But attaining concurrency is a tedious task, since today's languages and tools are inadequate for effectively transforming applications into parallel programs. The bottom line is that multiprocessor machines and distributed computing systems are the future, and to obtain maximum efficiency from these systems they must be programmed well.

The future of distributed computing requires machine-independent application programs and programming environments, so that programs can be ported to many computers at minimum conversion cost. High-level programming-language abstractions are also needed to let existing applications become concurrent incrementally. The programming model must make concurrency, and the need for it, easy to understand during both initial development and maintenance. Parallelism is key, and the concurrency revolution is a prime factor in the software revolution. Building multi-core processors is not the difficulty; the difficulty is programming them so that mainstream applications benefit from the growing CPU performance, and this requires finding methods better than the low-level tools of threads and synchronization that are the basic building blocks of today's parallel programs (Herb & James 2005). The sketch below shows the asynchronous pattern just described in miniature.
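As a minimal illustration of doing work asynchronously rather than synchronously (our own sketch, not from the cited paper; the function names are ours), the C program below hands a long-running computation to a POSIX worker thread while the main thread stays free, the same shape as moving work off a GUI thread.

    #include <pthread.h>
    #include <stdio.h>

    /* A stand-in for an expensive background computation. */
    static void *background_work(void *arg) {
        long long *result = arg;
        long long sum = 0;
        for (long long i = 0; i < 100000000LL; i++)  /* simulate heavy work */
            sum += i;
        *result = sum;
        return NULL;
    }

    int main(void) {
        long long result = 0;
        pthread_t worker;

        /* Offload the work instead of blocking the "foreground". */
        pthread_create(&worker, NULL, background_work, &result);

        /* The main thread stays responsive, e.g. to service a GUI loop. */
        printf("main thread: still responsive while work runs...\n");

        pthread_join(worker, NULL);   /* collect the result when ready */
        printf("background result: %lld\n", result);
        return 0;
    }

Compile with -pthread; the point is simply that the caller does not wait synchronously for the work to finish.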
4. The Dining Philosophers' Problem

A classic problem of synchronization is the Dining Philosophers' problem. Its aim is to avoid deadlock between a set of processes that either compete for system resources or communicate with each other. It is a classic operating-system problem that is best described and understood in non-OS terms. Consider N philosophers seated around a circular table, having a meal of spaghetti and discussing philosophy. Each philosopher requires two forks to eat, but there are only N forks, one between each pair of philosophers; this creates the "dining philosophers" problem. A solution is an algorithm, followed by every philosopher, that ensures that none of them starves so long as each philosopher eventually stops eating, and that lets the maximum number of philosophers eat simultaneously. The reason for describing the problem in such a non-technical fashion is to convey, through an analogy that can be reasoned about abstractly, how the corresponding situations arise in computers.

The problem is an accessible way to understand how resources are shared by applications and processes in shared-memory systems and distributed networks, thereby tackling the issue of concurrency. Consider first the following simple but deadlock-prone solution:

    void philosopher() {
        while (1) {
            sleep();
            get_left_fork();
            get_right_fork();
            eat();
            put_left_fork();
            put_right_fork();
        }
    }

If every philosopher picks up the left fork at the same time, none of them ever gets to eat (Swati & Ruchita 2009). Sub-optimal alternative solutions include:

1. Pick up the left fork; if the right fork is not available within a given time, put the left fork down, wait, and try again. A big problem arises if all philosophers wait at the same time, giving the same failure mode as before, only repeated. Even if every philosopher waits a different random time, an unfortunate philosopher may still starve (in the technical sense).

2. Have every philosopher acquire a single binary semaphore before picking up any fork. This guarantees that no philosopher starves, but it dramatically limits parallelism.

An example that achieves maximum concurrency:

    #define N 5                      /* number of philosophers */
    #define LEFT(i)  (((i) + N - 1) % N)
    #define RIGHT(i) (((i) + 1) % N)

    typedef enum { THINKING, HUNGRY, EATING } phil_state;

    phil_state state[N];
    semaphore mutex = 1;
    semaphore s[N];                  /* one per philosopher, all initially 0 */

    void test(int i) {
        if (state[i] == HUNGRY &&
            state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
            state[i] = EATING;
            V(s[i]);
        }
    }

    void get_forks(int i) {
        P(mutex);
        state[i] = HUNGRY;
        test(i);
        V(mutex);
        P(s[i]);
    }

    void put_forks(int i) {
        P(mutex);
        state[i] = THINKING;
        test(LEFT(i));
        test(RIGHT(i));
        V(mutex);
    }

    void philosopher(int process) {
        while (1) {
            think();
            get_forks(process);
            eat();
            put_forks(process);
        }
    }

(Swati & Ruchita 2009). The magic is in the test routine. When a philosopher is hungry, it uses test to try to eat; if the test fails, it waits on its semaphore until some other process sets its state to EATING. Whenever a philosopher puts down its forks, it invokes test on its neighbours. Note that test does nothing if the process is not hungry, and that mutual exclusion prevents race conditions.
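The code above uses abstract semaphore primitives P and V. A runnable translation onto POSIX threads and semaphores might look like the following; this is our own sketch of the same scheme, keeping the paper's function names, with sem_wait and sem_post standing in for P and V and a bounded number of rounds so the demo terminates.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 5
    #define LEFT(i)  (((i) + N - 1) % N)
    #define RIGHT(i) (((i) + 1) % N)

    typedef enum { THINKING, HUNGRY, EATING } phil_state;

    static phil_state state[N];
    static sem_t mutex;              /* protects state[] */
    static sem_t s[N];               /* one per philosopher, all initially 0 */

    static void test(int i) {
        if (state[i] == HUNGRY &&
            state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
            state[i] = EATING;
            sem_post(&s[i]);                 /* V(s[i]) */
        }
    }

    static void get_forks(int i) {
        sem_wait(&mutex);
        state[i] = HUNGRY;
        test(i);
        sem_post(&mutex);
        sem_wait(&s[i]);                     /* block until allowed to eat */
    }

    static void put_forks(int i) {
        sem_wait(&mutex);
        state[i] = THINKING;
        test(LEFT(i));                       /* maybe wake the neighbours */
        test(RIGHT(i));
        sem_post(&mutex);
    }

    static void *philosopher(void *arg) {
        int i = (int)(long)arg;
        for (int round = 0; round < 3; round++) {   /* 3 rounds, then stop */
            usleep(1000);                    /* think */
            get_forks(i);
            printf("philosopher %d eating\n", i);
            usleep(1000);                    /* eat */
            put_forks(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        sem_init(&mutex, 0, 1);
        for (int i = 0; i < N; i++) sem_init(&s[i], 0, 0);
        for (int i = 0; i < N; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }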
5. Logical Clocks

a. Let the processes be A, B and C, with respective clocks CA, CB and CC. Process A sends a message to B; this event is denoted a, and the receipt of that message at B is denoted b. The sending of a message from B to C is denoted c, and the receipt of that message at C is denoted d. The timestamps of the events in their respective processes are therefore CA(a), CB(b), CB(c) and CC(d). At clock 10, A sends its message to B; after 4 time units, B receives it, so with B's clock starting at 0 the receiving event b occurs at B's local clock 4. B waits one unit and then, at local clock 5, sends a message to C (event c). The message reaches C after 2 units, at clock 7, giving the receipt event d (Rob 2002). A logical clock system must satisfy the clock condition: since a occurs before b, we need CA(a) < CB(b) across the two processes A and B, and similarly CB(c) < CC(d). In particular, because a is the sending of a message by A and b is the receipt of that same message by B, CA(a) < CB(b) must hold (Rob 2002).

Clocks are therefore incremented for each event such that Ci = Ci + s (s > 0), where s is the estimated transmission time for messages between two processes. If a is the event of sending a message from process A to process B, the receiving process B sets its clock to the maximum of its own current clock and the sender's clock at the time of the event, i.e. Cj = max(Cj, tm + s), s > 0 (Charles 2000). Since logical clocks are values assigned to events that capture the sequence in which the events take place, the goal is to assign an integer value to each event e in an execution such that if a -> b then L(a) < L(b); the value L is the logical clock, and the clock value Ci(a) is the timestamp of event a in whichever process it occurs. In this problem, processes B, C and A start with clock values 0, 5 and 10 respectively (Werner 2009).

If a -> b, then clock(a) < clock(b) even when a and b are events on different processes. If two events a and b exchange no messages, then neither a -> b nor b -> a holds, and the events are said to be concurrent (Michael 2005). In the execution above, however, a -> b -> c -> d, so no two of these four events are concurrent.

In each message exchange above, the receiver's clock lags the sender's, so the receiver must be updated by the following rule:

    TA = timestamp of the send event at process A = clock of A = 10
    CB(b) = max{CB(b), TA + estimated transmission time}
          = max{0, 10 + 4} = 14 .......... (1)

    TB = timestamp of the send event at process B = clock of B = 14, from equation (1)
    CC(d) = max{CC(d), TB + estimated transmission time}
          = max{5, 14 + 2} = 16 (Watson 2003).

Table 5.1 Lamport timestamps

    Event    Lamport timestamp
    a        10
    b        14
    c        14
    d        16

Figure 5.1 Logical Clock

b. Two events are said to be concurrent if they are not causally related to each other, i.e. when no relative order of occurrence holds between them. If event a does not occur before event b, and b does not occur before a, then a and b are concurrent, denoted a || b; this is equivalent to b || a. The order of occurrence is captured by the "happened before" relation (->) (Rob 2002), which expresses causal dependencies between two or more events. The causal relation (->) is defined as follows: (1) if a and b are events in the same process and a occurs before b, then a -> b; (2) if a is the sending of a message by one process and b is the receipt of that message by another process, then a -> b; (3) the relation is transitive: if a -> b and b -> c, then a -> c (Arvind 2003). Furthermore, a causally affects b if a -> b; when two events have no causal relationship between them, they are concurrent (Charles 2000). If a -> b, then clock(a) < clock(b) even for events on different processes, and if two events a and b exchange no messages, then neither a -> b nor b -> a holds (Leslie 1978; Michael 2005). Two events are concurrent exactly when there is no communication between them.
Lamport introduced logical clocks as values, assigned to every event within a single process or between two processes, that give information about the order in which the events happen. Every process Pi has its own clock Ci. The logical clock Ci can be thought of as a function that assigns a value Ci(a) to an event a; this value Ci(a) is the timestamp of event a in Pi. These numbers are simply counters that increment every time an event occurs (Watson 2003). To summarize, the implementation rules for Lamport's logical clock are: first, for each event the clock Ci is incremented such that Ci = Ci + d (d > 0); secondly, if one process sends a message to another through an event a, the receiver sets its clock to the maximum of its current clock and the sender's clock, that is, Cj = max{Cj, tm + d1}, where d1 is the estimated transmission time (Watson 2003). These two rules are enough to reproduce Table 5.1, as the sketch below shows.
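The following C sketch (ours, not from the cited sources; the helper on_receive is our name) applies exactly these two rules to the three-process example of section 5a. Note that the paper's variant adds the estimated transmission time d to the sender's timestamp on receipt, where Lamport's original formulation adds a fixed increment of 1.

    #include <stdio.h>

    /* Receive rule from the text: Cj = max(Cj, tm + d), where tm is the
     * sender's timestamp and d the estimated transmission time. */
    static int on_receive(int clock, int tm, int d) {
        return (clock > tm + d) ? clock : tm + d;
    }

    int main(void) {
        int CA = 10, CB = 0, CC = 5;   /* initial clocks of A, B, C */

        int a = CA;                    /* event a: A sends, timestamp 10 */
        CB = on_receive(CB, a, 4);     /* event b: B receives after 4 units */
        int b = CB;                    /*   max(0, 10 + 4) = 14 */
        int c = CB;                    /* event c: B sends with its clock, 14 */
        CC = on_receive(CC, c, 2);     /* event d: C receives after 2 units */
        int d = CC;                    /*   max(5, 14 + 2) = 16 */

        printf("a=%d b=%d c=%d d=%d\n", a, b, c, d);  /* a=10 b=14 c=14 d=16 */
        return 0;
    }

Running it prints the same four timestamps as Table 5.1.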
6. Bully Algorithm

Leader election is a significant and critical problem in distributed computing. The Bully algorithm, developed by Garcia-Molina in 1982, is the classic solution for electing a leader in synchronous systems with crash failures. It is a method for dynamically selecting a coordinator by process ID number. Whenever a process notices that the coordinator is no longer responding to requests, it initiates an election. An election is carried out by a process P as follows (see the sketch after the example below):

1. P sends an ELECTION message to all processes with higher IDs and awaits OK messages.
2. If no process responds, P wins the election and becomes the new coordinator, sending I WON messages to the lower-ID processes.
3. If any of the higher-ID processes answers P with an OK message, that process takes over and P's job is done; P then awaits an I WON message (Scribd 2008).

Any process can initiate an election, either when it has just recovered from failure or when it notices that the coordinator has failed. At any time, a process can receive an ELECTION message from one of its lower-ID colleagues. On arrival of such a message, the receiver sends back an OK message to alert the sender that it is running and ready to take over, and then holds an election of its own, unless it is already holding one. The election continues this way until all processes but one give up; that remaining process is the newly elected coordinator (leader). The new coordinator announces its victory by sending I WON messages to all other processes, stating that it is the new leader. Furthermore, a process that was previously down but has come back up is eligible to hold an election; if it happens to hold the highest ID among all running processes, it will win the election and take over the leader's job. The algorithm is so named because the biggest guy in the village always wins, hence "Bully algorithm" (Scribd 2008).

a. Example

Consider eight processes numbered 0 to 7, as shown in the figure below. Initially process 7 was the coordinator, but it has crashed. Process 4 notices this first, so it sends ELECTION messages to all processes with higher IDs, namely 5, 6 and 7, as shown in (a). Processes 5 and 6 respond to 4 with OK messages, as shown in (b). Immediately after receiving one of these OK responses, process 4 knows its job is done and waits to see who the final leader will be. Processes 5 and 6 then hold their own elections, each sending messages only to the processes with higher IDs than itself, as shown in (c). In (d), process 6 notifies 5 that it will take over; knowing that process 7 is dead, it claims victory by sending COORDINATOR messages to all other running processes. When process 4 receives this message, it resumes the work it was doing when it discovered the crash of 7, but now using process 6 as the new coordinator. In this way the crash of process 7 is handled and work is resumed (Scribd 2008).

Figure 6.1 Bully Algorithm Example

b. Performance of the Bully Algorithm

Best case: if the process with the second-highest ID notices the coordinator's crash and eventually elects itself, then N - 2 messages are sent out and the turnaround time is one message transmission time. Worst case: when the process with the smallest ID detects the failure, N - 1 processes begin elections simultaneously, each sending messages to the processes with higher IDs; the message overhead is O(N^2) and the turnaround time is as high as five message transmission times (Scribd 2008).
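To make the message flow concrete, here is a small self-contained C simulation of the election logic (our own illustration, not from the cited source). Two simplifications are assumed: process liveness is a boolean array rather than real message passing, and only the highest responder carries the election forward, whereas in the full algorithm every responder holds its own election.

    #include <stdio.h>
    #include <stdbool.h>

    #define N 8

    static bool alive[N];   /* alive[i] == true if process i is running */

    /* Process p runs an election: contact all higher IDs; if none is
     * alive, p wins; otherwise the highest live process takes over. */
    static int election(int p) {
        printf("process %d starts an election\n", p);
        int higher = -1;
        for (int q = p + 1; q < N; q++) {
            if (alive[q]) {
                printf("  %d -> %d: ELECTION; %d replies OK\n", p, q, q);
                higher = q;          /* someone bigger will take over */
            }
        }
        if (higher == -1) {
            printf("  no higher process alive: %d is coordinator\n", p);
            return p;                /* p announces I WON to lower IDs */
        }
        return election(higher);     /* the highest responder carries on */
    }

    int main(void) {
        for (int i = 0; i < N; i++) alive[i] = true;
        alive[7] = false;            /* coordinator 7 crashes */
        int leader = election(4);    /* process 4 notices the crash first */
        printf("new coordinator: %d\n", leader);
        return 0;
    }

On the section's example this traces 4 contacting 5 and 6, then 6 finding no live higher process and becoming coordinator, matching Figure 6.1.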
7. Conclusion

In distributed and parallel computing, the interconnection between processes plays a significant role. The processors communicate and co-operate in solving a problem, or they may run independently, often under the control of another processor which distributes work to them and collects results from them. Amdahl's law is an essential speedup performance law for increasing the operational efficiency of parallel systems. Concurrency enables the processors of distributed systems to share memory and resources effectively; to better understand its execution, the Dining Philosophers' algorithm, the Bully algorithm and logical clocks are implemented. The technology of parallel processing and distributed computing is the result of four decades of research and industrial advancement in hardware and desktop technology. This rapid progress in hardware has significantly increased the economic feasibility of building new-generation computers. The major barrier to parallel processing, however, is on the software and application side: it is still very difficult and painful to program parallel and vector computers. To conclude, we need to set trends and strive for major growth and progress in software technology, to create even more user-friendly environments and user-oriented applications for high-power computers.

List of Terminology

1. Parallel computing system: a computer with more than one processor for parallel processing.
2. Distributed system: a set of asynchronous computing devices connected by a network.
3. Processing elements (PEs): the processors in a parallel computer.
4. Amdahl's law: a speedup performance law based on a fixed workload or fixed problem size, used to predict the theoretical maximum speedup obtainable with multiple processors.
5. Concurrency: the simultaneous execution of two or more processes on a single processor, or on multiple processors running at the same time in a distributed network, using shared memory.
6. Dining Philosophers' problem: a classic synchronization problem that aims to avoid deadlock between a set of processes that either compete for system resources or communicate with each other.
7. Deadlock: a condition in which two or more processes try to gain access to the same resource at the same time, each waiting for the other to release the resource.
8. Event: an occurrence that triggers communication between two processes.
9. Concurrent events: two events a and b that pass no messages to each other, so that neither precedes the other in occurrence.
10. Logical clock: a numerical value assigned to an event, depicting the order in which events occur.
11. Timestamp: the clock value at which an event takes place in a running process.
12. Bully algorithm: a leader-election algorithm that elects as coordinator the process with the highest process ID.

References

1. Scribd 2008, Bully Algorithm, Creativecommons, viewed 1 October 2009.
2. Sutter, H & Larus, J 2005, Software and the Concurrency Revolution, Association for Computing Machinery, viewed 2 October 2009.
3. Shi, Y 1996, Reevaluating Amdahl's Law and Gustafson's Law, Computer and Information Sciences Department, viewed 2 October 2009.
4. Hoogerwoord, R R 2002, Leslie Lamport's Logical Clocks: a tutorial, Creativecommons, viewed 1 October 2009.
5. Tobis, M 2005, Logical Clocks, Loyola University, Chicago, viewed 1 October 2009.
6. Krishnamurthy, A 2003, Logical Clocks, Yale University, viewed 30 September 2009.
7. Lamport, L 1978, 'Time, Clocks, and the Ordering of Events in a Distributed System', Communications of the ACM, viewed 30 September 2009.
8. Nutt, W 2009, Distributed Systems, viewed 30 September 2009.
9. Chan, C 2000, Implementing Fault-Tolerant Services Using the State Machine Approach: a tutorial, ACM Computing Surveys, viewed 1 October 2009.
10. Watson, D G 2003, Lamport's Logical Clocks, The Everything Development Company, viewed 2 October 2009.
11. Jain, S & Rathi, R (eds) 2009, Operating Systems, 3rd edn, Tech-Max, Pune, India.