Introduction to Distributed Systems
Distributed computing is a field of computer science that studies distributed systems.
A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages.
The components interact with each other in order to achieve a common goal.
According to Tanenbaum’s definition:
A distributed system is a collection of independent computers that appears to its users as a single coherent system.
Three significant characteristics of distributed systems are:
- concurrency of components,
- lack of a global clock,
- and independent failure of components.
Examples of distributed systems range from SOA-based systems to massively multiplayer online games to peer-to-peer applications.
Two defining properties of a distributed system are:
- There are several autonomous computational entities, each of which has its own local memory.
- The entities communicate with each other by message passing.
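The two properties above can be sketched in code. The following is an illustrative example (not from the source): two processes, each with its own local memory, that coordinate purely by passing messages over queues.

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # The worker holds only local state (total); it learns about the
    # rest of the system exclusively through incoming messages.
    total = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(("result", total))
            break
        total += msg

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in (1, 2, 3):
        inbox.put(n)       # send messages; no variables are shared
    inbox.put("stop")
    print(outbox.get())    # prints ('result', 6)
    p.join()
```

The parent never reads the worker's memory directly; the only channel between the two entities is the message queue, which is exactly the coordination model described above.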
Various hardware and software architectures are used for distributed computing.
Distributed programs typically follow one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer. They can also be classified as loosely coupled or tightly coupled.
- Client–server: architectures where smart clients contact the server for data, then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change.
- Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
- n-tier: architectures that typically refer to web applications which forward their requests on to other enterprise services. This type of application is the one most responsible for the success of application servers.
- Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among the machines, known as peers. Peers can serve both as clients and as servers.
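The client–server pattern can be sketched in a few lines. This is a minimal, hypothetical example: the server owns the computation (here it simply uppercases the request), and a smart client contacts it over a socket and formats the reply for the user.

```python
import socket
import threading

def start_server() -> int:
    """Start a one-shot server in a background thread; return its port."""
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    sock.listen(1)

    def serve() -> None:
        conn, _ = sock.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # the server owns the data/logic
        sock.close()

    threading.Thread(target=serve, daemon=True).start()
    return sock.getsockname()[1]

def client_request(port: int, text: str) -> str:
    # Smart client: contact the server, then format the result for display.
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(text.encode())
        return conn.recv(1024).decode()

port = start_server()
print(client_request(port, "hello"))  # prints HELLO
```

A three-tier system would insert a middle tier between `client_request` and the server, so the client could remain stateless.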
Shared Memory Architecture
In the shared-memory architecture, the entire memory, i.e., main memory and disks, is shared by all processors.
A special, fast interconnection network (e.g., a high-speed bus or a cross-bar switch) allows any processor to access any part of the memory in parallel.
All processors are under the control of a single operating system, which makes it easy to deal with load balancing.
It is also very efficient since processors can communicate via the main memory.
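The idea of processors communicating through shared main memory can be sketched with a small example (illustrative only; `multiprocessing.Value` stands in for a word of shared memory, and its lock plays the role of arbitration on the interconnect):

```python
from multiprocessing import Process, Value

def add(counter, n: int) -> None:
    # Each "processor" (process) updates the same word of shared memory;
    # the lock serializes concurrent accesses to it.
    for _ in range(n):
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)   # one int living in memory shared by all workers
    workers = [Process(target=add, args=(counter, 1000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)      # prints 4000: every worker saw the same memory
```

Communication here needs no explicit messages at all, which is why the text calls shared-memory communication very efficient; the trade-off is that access to shared locations must be synchronized.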
Distributed Memory Architecture