Remote Terminals With Linux – An Introduction


One of the most interesting features of Linux is its versatility: the ability to build sophisticated configurations out of the box. You do not need to buy some ultimate "business edition" to set up a complex client/server system with dumb terminals and a remote application server.

Creating a client/server network is relatively easy, since multitasking, multi-user architecture is a native feature of Linux.

But to understand the process, we first need some theory: what a client/server network with remote dumb terminals is, what its advantages are, in which cases it can be used, and in what ways it can be implemented on Linux.

A little history

Remote terminals, formerly known as dumb terminals, have been in the IT arena for a long time. They were called dumb terminals because little or no processing was done on the client side: they simply displayed the server's output and sent user input from the keyboard and/or mouse back to the server. The system was centralized, with all data and applications stored and managed by a single server or a cluster of servers.

The central processing concept was widely adopted by companies during the 1970s, thanks to advantages such as fault tolerance, central administration and security.

However, as the cost of PCs dropped sharply in the 1980s, decentralized systems built from individual PCs gained popularity. In addition, PCs introduced several features that dumb terminals lacked, such as a graphical user interface and per-user environment customization.

Decentralization, on the other hand, made management, maintenance and system upgrades an arduous task, since everything had to be done locally on each machine.

From the late 1980s to the mid-1990s, a hybrid of the two approaches, known as client/server, began to dominate computer networking.

The server handled information processing against a centralized database, while the client PCs ran the applications and the user interface. Data was easily preserved, and performance was better than sharing files from an ordinary PC.

The problem of maintaining, managing and updating individual PCs (both applications and the operating system) remains one of the main drawbacks of computing with fat, or "robust", clients, since the most critical parts run on the client side.

A solution: turning our eyes to the past

To solve the various problems of distributed computing under the robust-client model, the lean, or thin, client concept was created.

The term thin client was coined in 1993 by Tim Negris, VP of Server Marketing at Oracle Corp., while working with the company’s founder Larry Ellison on the release of Oracle 7.

At the time, Oracle wished to differentiate its server-oriented software from Microsoft's desktop-oriented products. Negris's term was then popularized by its frequent use in Larry Ellison's speeches and interviews about Oracle's products. From this period comes the famous "internet terminal", the US$500 computer that Oracle began to advertise and advocate.

The term stuck for several reasons. The earlier term "graphical terminal" had been chosen to contrast such terminals with text-based ones, and thus put the emphasis on graphics. It was also not well established among IT professionals, most of whom had been working on fat-client systems. "Thin client" also conveys the fundamental hardware difference better: thin clients can be built with much more modest hardware, because they perform much more modest operations.

The Hardware Options

Several companies entered the thin-client segment, offering hardware solutions for implementing thin-client networks:

  • ChipPC
  • Fujitsu
  • HP
  • Igel
  • LISCON
  • OpenThinClient
  • Sun Microsystems
  • Thinvent
  • Wyse

Enter Linux

Linux, by its very nature, was designed around the network paradigm: client terminals and a server providing services and processing capacity over the network, a model inherited from its father figure, Unix.

And from Unix, Linux also inherited the X graphical environment paradigm.

The X graphical environment began to be developed in 1984 by Bob Scheifler and Jim Gettys at MIT.

It was a joint effort to develop a graphical environment for Unix, and several companies were interested: IBM, DEC, Sun and HP, to name a few. On the academic side, the universities involved included MIT, Carnegie Mellon, Stanford and Brown.

The development of X, the graphical environment of Unix, created a client/server paradigm for the graphical interface that works as follows:

X uses a client–server model: an X server communicates with various client programs. The server accepts requests for graphical output (windows) and sends back user input (from keyboard, mouse, or touchscreen). The server may function as:

  • an application displaying to a window of another display system
  • a system program controlling the video output of a PC
  • a dedicated piece of hardware.

Most importantly, in this architecture the X server and its clients can run on separate machines, communicating through the X protocol over a local network. Note that the terminology is the reverse of the usual client/server convention: the X server runs on the machine in front of the user (the terminal), while the clients are the applications, which may be running elsewhere.
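
To make that network transparency concrete, here is a minimal sketch, assuming two machines on a trusted LAN with hypothetical addresses: a terminal at 192.168.0.20 running the X server, and an application server at 192.168.0.10 running the client programs.

    # On the terminal (the machine whose screen you sit at, i.e. the
    # machine running the X *server*), allow connections from the
    # application server. xhost-based access control is insecure, so
    # only do this on a trusted LAN.
    xhost +192.168.0.10

    # On the application server (192.168.0.10), point a client program
    # at the terminal's display (192.168.0.20) and run it; the window
    # appears on the terminal's screen.
    DISPLAY=192.168.0.20:0 xclock

    # A safer equivalent, run from the terminal, using SSH X11
    # forwarding instead of raw X traffic:
    ssh -X user@192.168.0.10 xclock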

[Figure: a graphical representation of how the X protocol works]

Since the X graphical environment and its history are a long and complex subject, we will not go into them any further here.

The interesting aspect of the X architecture, later inherited by Linux, is that it makes it very easy to build networks of dumb terminals connected to a central server.

What interests us here is XDMCP, the X Display Manager Control Protocol, which was developed for the X11R4 release in 1989 and introduced remote display management as it is used today in Linux and other *NIX systems.
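
As a sketch of what this looks like in practice, assuming the LightDM display manager (GDM and XDM expose equivalent settings under different names), enabling XDMCP on the application server takes only a few configuration lines:

    # /etc/lightdm/lightdm.conf on the application server
    # (XDMCP is disabled by default; the protocol is unencrypted,
    # so enable it only on a trusted network)
    [XDMCPServer]
    enabled=true
    port=177

Port 177/UDP is the standard port registered for XDMCP.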

Advantages

  • Reduced cost of network ownership. Understand cost of ownership as the sum of the purchase price of the computers, maintenance, software licenses, power consumption and so on;
  • Remote administration of each terminal;
  • Flexibility. If a terminal suffers a hardware failure, just ask the user to start a new graphical session from another one (see the sketch after this list). No information is lost, because it is all centralized on the server;
  • High scalability. To increase the number of terminals on the network, just increase the processing capacity and the amount of RAM in the server;
  • A graphical session can be customized for each user, granting or restricting access to certain features or applications on the server;
  • The configuration and generation of the operating system that runs on the terminals can be done easily, respecting each machine's capabilities and limitations;
  • Obsolete computers can be reused as terminals, reducing network costs and the environmental impact of discarding such equipment.
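
To illustrate how little the terminal itself has to do, the commands below (with a hypothetical server address) start a bare X server that requests a login session from the remote XDMCP server; Xephyr offers a convenient way to test the same thing from an existing desktop:

    # On a terminal with no local session: start an X server on
    # display :1 and ask 192.168.0.10 (hypothetical address) for an
    # XDMCP login screen
    X :1 -query 192.168.0.10

    # To test from an existing desktop, run the same query inside a
    # nested X server window instead of taking over the console:
    Xephyr :1 -query 192.168.0.10 -screen 1024x768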

Disadvantages

  • High data traffic generated by the communication between the server and the terminals;
  • The server becomes the critical point of the network: if it stops working, all users are unable to work;
  • The server is more vulnerable to attack if an attacker gains access to the XDMCP network.

Where an XDMCP client/server network can be used:

XDMCP client/server networks can be deployed successfully in reading rooms, libraries, schools, universities, internet access centers, cyber cafés and offices; in short, in any situation where data processing, input and output can be done in batches and synchronously.

Where an XDMCP network cannot be used:

Multimedia processing, asynchronous data processing, real-time processing and games. In short, video editing, sound editing, 3D modeling and gaming do not perform well on an XDMCP network.

In coming articles, I will detail how to implement an XDMCP network with Linux, using outdated computers that are no longer fit for everyday use.

Until then!
