A distributed system is a collection of independent computers that appears to its users as a single coherent system. Distributed Computing is a model in which components of a software system are shared among multiple computers to improve performance and efficiency.
All the computers are tied together in a network, either a Local Area Network (LAN) or a Wide Area Network (WAN), and communicate with each other so that different portions of a distributed application run on different computers, potentially in different geographical locations. A computer program that runs within a distributed system is called a distributed program. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
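The divide-and-communicate pattern described above can be sketched in a single machine with Python's `multiprocessing` module: a problem (summing a large range) is split into sub-tasks, each handled by a separate worker process, and partial results flow back to the coordinator via message passing. The function and variable names here are illustrative, not from any particular framework.

```python
from multiprocessing import Process, Queue

def worker(task_id: int, start: int, end: int, results: Queue) -> None:
    # Each worker solves its own sub-task independently...
    partial = sum(range(start, end))
    # ...and communicates the result back as a message.
    results.put((task_id, partial))

def distributed_sum(n: int, num_workers: int = 4) -> int:
    results: Queue = Queue()
    chunk = n // num_workers
    procs = []
    for i in range(num_workers):
        start = i * chunk
        end = n if i == num_workers - 1 else (i + 1) * chunk
        p = Process(target=worker, args=(i, start, end, results))
        p.start()
        procs.append(p)
    # Collect one message per worker, then clean up.
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(distributed_sum(1_000_000))  # same as sum(range(1_000_000))
```

In a real distributed system the workers would live on different machines and the queue would be replaced by network messages, but the structure — partition, compute, exchange messages, combine — is the same.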
Distributed systems are characterized by their structure: a typical distributed system will consist of some large number of interacting devices that each run their own programs but that are affected by receiving messages or observing shared-memory updates or the states of other devices. Examples of distributed systems range from simple systems in which a single client talks to a single server to huge amorphous networks like the Internet as a whole.
Distributed Computing Architecture
Distributed computing architecture is characterized at both the hardware and the software level. At the lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At the higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
Distributed programming typically falls into one of several basic architectures: client-server, three-tier, n-tier, or peer-to-peer; each of these can be further classified as loosely or tightly coupled.
Client-server: architectures where smart clients contact the server for data, then format and display it to the user. Input at the client is committed back to the server when it represents a permanent change.
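A minimal sketch of this pattern in Python: the server holds the data, and the client fetches it over a socket, formats it, and returns it for display. The host, the message, and the uppercase "formatting" step are illustrative assumptions, not part of any standard protocol.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def serve_once(server_sock: socket.socket) -> None:
    # The server's only job here: accept one client and supply the data.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(b"hello from the server")

def main() -> str:
    server_sock = socket.socket()
    server_sock.bind((HOST, PORT))
    server_sock.listen(1)
    port = server_sock.getsockname()[1]
    threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

    # The "smart client": contacts the server, then formats the raw data.
    with socket.create_connection((HOST, port)) as client:
        raw = client.recv(1024)
    server_sock.close()
    return raw.decode().upper()  # client-side formatting before display

if __name__ == "__main__":
    print(main())  # HELLO FROM THE SERVER
```

Here the server and client share one process for demonstration; in practice they run on separate machines, which is exactly what makes the system distributed.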
Three-tier: architectures that move the client intelligence to a middle-tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
n-tier: architectures that refer typically to web applications that further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database: processes coordinate and exchange state entirely through the data they read from and write to the database. This enables distributed computing functions both within and beyond the boundaries of a networked database.
Advantages of Distributed Computing
- Reliability, high fault tolerance: A system crash on one server does not affect other servers.
- Scalability: In distributed computing systems you can add more machines as needed.
- Flexibility: It makes it easy to install, implement and debug new services.
- Fast calculation speed: A distributed computer system can have the computing power of multiple computers, making it faster than other systems.
- Openness: Since it is an open system, it can be accessed both locally and remotely.
- High performance: Compared to a centralized system, a distributed cluster can provide higher performance and a better price-to-performance ratio.
Disadvantages of Distributed Computing
- Difficult troubleshooting: Troubleshooting and diagnostics are more difficult due to distribution across multiple servers.
- Less software support: Comparatively little software is designed for distributed systems, which is a major drawback.
- High network infrastructure costs: The network infrastructure is costly to set up and brings its own problems, including transmission delays, high load, and loss of information.
- Security issues: The openness of distributed systems introduces risks around data security and sharing.
Examples of Distributed Computing
Examples of distributed systems and applications of distributed computing include the following:
1: Telecommunication networks:
- telephone networks and cellular networks
- computer networks such as the Internet
- wireless sensor networks
- routing algorithms
2: Network Applications:
- World Wide Web and peer-to-peer networks
- massively multiplayer online games and virtual reality communities
- distributed databases and distributed database management systems
- network file systems
- distributed cache such as burst buffers
- distributed information processing systems such as banking systems and airline reservation systems
3: Real-Time Process Control:
- aircraft control systems
- industrial control systems
4: Parallel Computation:
- scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects
- distributed rendering in computer graphics
Distributed Computing Projects
For projects, see the List of Distributed Computing Projects.
Distributed Computing Software
Developed by the OSF (Open Software Foundation), the Distributed Computing Environment (DCE) is a software technology for deploying and managing data exchange and computation in a distributed system. Typically used in large network computing systems, DCE provides the underlying concepts, and some of its major users include Microsoft (DCOM, ODBC) and Encina.
Distributed computing helps improve the performance of large-scale projects by combining the power of multiple machines. It’s much more scalable and allows users to add computers according to growing workload demands. Although distributed computing has its own disadvantages, it offers unmatched scalability, better overall performance and more reliability, which makes it a better solution for businesses dealing with high workloads and big data.