
IMPLEMENTATION OF THIN CLIENT TECHNOLOGY BASED ON A FAULT-TOLERANT NLB CLUSTER FOR MONITORING PROCESSES OF POWER GENERATION ENTERPRISE

Balanov A.A. 1, Zhidkov D.A. 1, Kuligina N.O. 1, Tarlakovskaya E.A. 1
1 Nizhny Novgorod State Technical University n. a. R.E. Alekseev
The paper provides a detailed description of the process of implementing thin client technology based on a fault-tolerant NLB cluster using the existing computing resources of a power enterprise. The authors describe the sequence of integration steps, from the selection of equipment and preparation of the enterprise's local area network to the configuration of servers and installation of software. The relevance of such a system for fault tolerance at an energy enterprise deserves separate mention: systems of this kind are still poorly developed at such enterprises, yet they have great implementation prospects, and the increased fault tolerance they provide is equally noteworthy. The system allows operational monitoring in any emergency situation and also duplicates the existing enterprise systems. A major advantage is that the implementation is carried out on the enterprise's existing computing facilities and requires no additional purchase of equipment or software. It can be performed by the enterprise's own employees, without involving additional labor or contractors, which once again confirms the relevance and expediency of the system.
Keywords: operating system, Microsoft Windows, updates, update methods, Acronis, SCCM, MDT, WSUS, software
1. Thin client. URL: https://en.wikipedia.org/wiki/Thin_client (accessed: 23.12.2021).
2. Spanning Tree Protocol. URL: https://en.wikipedia.org/wiki/Spanning_Tree_Protocol (accessed: 23.12.2021).
3. Patterson D.A. A Case for Redundant Arrays of Inexpensive Disks (RAID). Archived 18 September 2006. URL: Patterson88.pdf (mit.edu) (accessed: 23.12.2021).
4. Sheikh H. Intermediate-level high-availability software in Linux, Part 1: Heartbeat and Apache Web Server. IBM developerWorks Russia. URL: https://www.interface.ru/home.asp?artId=2620 (accessed: 23.12.2021).
5. Maksimov A. Deploying Thinstation thin clients with connection to Windows Server 2012 R2 Remote Desktop Services. 15.02.2017. URL: https://blog.it-kb.ru/2017/02/15/build-an-isolated-network-of-thin-linux-clients-with-thinstation-and-freerdp-connecting-to-windows-server-2012-r2-remote-desktop-services-with-auto-logon-and-operating-during-working-hours/ (accessed: 23.12.2021).
6. About User Profiles (Windows). MSDN Library, Microsoft Docs (accessed: 23.12.2021).
7. Cherepenin S., Chubin I. PXE. URL: http://xgu.ru/wiki/PXE (accessed: 23.12.2021).

In the modern world it is difficult to imagine any industry without information technologies that automate or simplify routine production processes, and the electric power industry in Russia is no exception. Software products, automation mechanisms and other IT solutions are unique for each enterprise in this industry, since each enterprise has its own characteristics in creating and operating its IT infrastructure. The paper reports on the project "Implementation of thin client technology for monitoring the processes of an energy generating enterprise".

The goal of this project is to create a thin client infrastructure that allows shift personnel to carry out operational monitoring of the processes of a power generating enterprise using existing computing capacities.

Materials and methods of research

A thin client in computer technology is a computer or client program in networks with a client-server or terminal architecture that transfers all or most of the information processing tasks to the server [1].

Shift personnel – shift supervisors of workshops, machinist-linemen, on-duty electricians and laboratory assistants – need to see current readings on the operation of various mechanisms and installations of the thermal power plant, as well as fill out incident reports for each shift. Deploying a full-fledged automated workplace (AW) for these tasks does not make sense: the AW capacity would be underused, a standard PC lacks resiliency and takes a long time to prepare, a failed AW cannot be replaced quickly, and there is no budget for purchasing additional computing equipment.

Similar ready-made solutions are offered by HP, DEPO and Aquarius, but their high cost makes them impractical to implement.

The first stage of the project implementation is the preparation of the enterprise LAN segment. MOXA EDS-G512E-4GSFP and Cisco Catalyst 2950 and 2960 industrial switches are already installed at the proposed installation sites; for the rest, the enterprise's existing LAN capacity is used. The Department of Communications and Telecommunications Equipment creates a separate subnet for this project and carries out the necessary configuration of the telecommunications equipment. The Cisco ASA5508-K9 firewall is used to separate and filter traffic. When connecting switches, the Spanning Tree Protocol (STP) is used to increase fault tolerance. The main goal of STP is to eliminate loops in the topology of an arbitrary Ethernet network that has one or more network bridges connected by redundant links. STP solves this problem by automatically blocking connections that are currently redundant for full connectivity of the switches [2].
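As an illustration only, enabling a rapid STP variant on the Catalyst switches might look roughly as follows; the VLAN number and bridge priority are hypothetical and depend on the subnet created for the project, and exact commands and STP variant support differ between IOS versions:

    spanning-tree mode rapid-pvst
    spanning-tree vlan 10 priority 4096

The resulting topology can then be checked with the show spanning-tree summary command.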

A detailed network diagram is provided in Appendix 1. After preparing the network and telecommunications equipment, we can move on to the server part of the project. After the centralization of its IT infrastructure, the enterprise still has two free HP ProLiant DL380 G5 servers with similar characteristics. Server specifications: two dual-core Intel Xeon 5130 processors (2.0 GHz, 65 W, 1333 MHz FSB); 8 GB of RAM; a Smart Array P400/256 MB controller (RAID 0/1/1+0/5) with four HP 72 GB 3G SAS 10K SFF DP hard drives; two power supplies. These servers are suitable for the project because they have sufficient computing power and the technologies necessary to improve fault tolerance, they showed stable performance during previous operation, and consumables and spare parts for repair in case of failure are available in the enterprise's warehouse.

For increased fault tolerance, the hard disks on each of the servers are combined into a Redundant Array of Independent Disks (RAID). RAID is a data virtualization technology that combines multiple physical disk devices into a logical module to improve fault tolerance and performance [3]. RAID 5 was chosen as the rational solution, since it survives the failure of any single disk while sacrificing the capacity of only one disk: with four 72 GB drives, the usable capacity is (4 - 1) × 72 GB = 216 GB on each server. Of this, 100 GB is allocated for the OS system disk, and the remaining volume is used as local storage.

Windows Server 2008 R2 Standard with Russian localization is installed on each of the servers, together with all the necessary drivers and OS updates. This OS was chosen for the following reasons: it is the latest OS version supported by this hardware; licenses for it are available; it provides the Network Load Balancing service for server balancing and clustering; it provides TFTP protocol services for connecting the clients themselves to the terminal server; and it offers the classic Windows interface for the end user.

After the OS is installed, it is configured on each of the project servers.

The network adapters are configured first. Each server is assigned a static IP address in accordance with the enterprise's regulations. Each of the servers has two network adapters, but unfortunately it was not possible to team them (NIC teaming) to improve fault tolerance, because teaming caused incorrect operation when clients connected to the server cluster.
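As a sketch, a static address can be assigned to an adapter from the command line in the following way; the connection name, addresses and mask here are hypothetical examples rather than the enterprise's actual addressing plan:

    netsh interface ip set address name="Local Area Connection" static 10.10.10.11 255.255.255.0 10.10.10.1
    netsh interface ip set dns name="Local Area Connection" static 10.10.10.5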

Since the enterprise has a domain infrastructure, the servers are joined to the domain, and each server is assigned a unique DNS name according to the regulations. This is done for convenience of administration and for the use of domain automation policies and scripts, which significantly reduces the time needed to set up and configure the hardware and user accounts of this project.

In the Server Manager console, the Remote Desktop Services, Windows Deployment Services and File Server roles are added. These roles are necessary for correct operation and connection of the thin client to the terminal server. The Remote Desktop Services role is responsible for correct operation of connections over the Microsoft RDP protocol, the Windows Deployment Services role is directly responsible for connecting thin clients to the server over the TFTP protocol, and the File Server role provides data storage and administration capabilities.

The Remote Desktop Services role must be installed with the Remote Desktop Session Host and Remote Desktop Connection Broker services. The Windows Deployment Services role is installed together with the Transport Server service. The File Server role is installed with the services suggested by default.

In addition to installing roles and services for Windows Server 2008 R2 Standard, the accompanying OS components must be installed. The following components are required: Network Load Balancing (NLB), TFTP Client and Remote Assistance.
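A possible way to add the listed roles, role services and components from an elevated PowerShell session on Windows Server 2008 R2 is sketched below; the feature names are assumed to match the standard Server Manager identifiers and should be verified with Get-WindowsFeature before use:

    Import-Module ServerManager
    # roles and role services: RD Session Host, RD Connection Broker, WDS Transport Server, File Server
    Add-WindowsFeature RDS-RD-Server, RDS-Connection-Broker, WDS-Transport, FS-FileServer
    # components: Network Load Balancing, TFTP Client, Remote Assistance
    Add-WindowsFeature NLB, TFTP-Client, Remote-Assistance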

Network Load Balancing (NLB) is necessary for creating a server cluster and improving the project's fault tolerance. Its principle of operation is based on distributing requests through one or more input nodes, which redirect them for processing to the other computing nodes. The initial goal of such clusters is performance, but they often also use methods that improve reliability [4].

The TFTP Client component is used for various interactions with the enterprise infrastructure over this protocol.

The Remote Assistance component is required for remote connection to an end-user session in order to provide technical support, configure the desktop and software, and give other consultations.

After successfully installing all the components, a fault-tolerant NLB cluster is created from these servers. To do this, NLB is enabled in the properties of the network adapter of each server. Then a new server cluster is created in the NLB management console: an IP address is assigned in accordance with the company's regulations, both servers are added, the network adapters of the servers in the cluster are configured, and the priority of each server is specified, the first one being the primary. This setting allows the network load on each of the cluster servers to be managed automatically: a connecting thin client is automatically directed to the server with the lower network load.
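The same cluster can alternatively be created from PowerShell using the NetworkLoadBalancingClusters module that ships with the NLB feature; the cluster address, cluster name, interface name and second node name below are hypothetical placeholders:

    Import-Module NetworkLoadBalancingClusters
    # create the cluster on the first server and assign the cluster IP address
    New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "ThinClientNLB" -ClusterPrimaryIP 10.10.10.10
    # add the second server to the cluster
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName "SRV-TC-02" -NewNodeInterface "Local Area Connection"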

Thinstation 5.1 was selected as the software that provides the connection of the thin client to the terminal server. Thinstation is a Linux distribution designed specifically for creating thin clients. It is a "stripped-down" Linux with pre-installed programs necessary for network operation [5].

The advantages of this OS are that it is distributed free of charge (open source, no licenses need to be purchased), its distribution kit is small, it supports most remote access protocols, and it has a modular structure. In addition, this OS showed the shortest connection time to the terminal server – 52 seconds – in contrast to similar software.

When the Windows Deployment Services role and the accompanying Transport Server service were installed, a TFTPRoot folder was created in the root of the system local disk. Enterprise domain policies allow automatic downloads from this folder, the Transport Server service also starts automatically, and the OS firewall imposes no restrictions on downloads and access. The Thinstation 5.1 OS files are placed in the TFTPRoot folder on each of the cluster servers.

The distribution kit of Thinstation 5.1 includes: the PXE boot loader – pxelinux.0; the Linux kernel – vmlinuz; the file containing the OS file system – initrd; the PXE boot loader configuration file – pxelinux.cfg/default; the OS configuration file – thinstation.conf.network; the file with customized thin client settings – thinstation.hosts; and other supporting files depending on the OS build (this build is minimal and contains only the files necessary for the OS to operate).
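One possible layout of the TFTPRoot folder after these files are copied is shown below (the drive letter is assumed to be the system disk):

    C:\TFTPRoot\
        pxelinux.0
        vmlinuz
        initrd
        pxelinux.cfg\default
        thinstation.conf.network
        thinstation.hosts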

Since a build with universal parameters was selected for loading the thin clients, only the IP address of the TFTP server from which the download is performed needs to be specified. In this project, this will be the IP address of the NLB cluster. In the thinstation.conf.network file, the value of the SESSION_0_RDESKTOP_SERVER parameter is changed to the IP address of the server cluster. Other parameters (screen resolution, type of RDP connection, external devices of the thin client, etc.) are configured depending on the characteristics of the thin client itself; in this way, more advanced client configuration is performed.
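For illustration, minimal sketches of the two configuration files might look as follows; the label, address and resolution are hypothetical, and real builds contain additional parameters:

    pxelinux.cfg/default:
        DEFAULT thinstation
        LABEL thinstation
            KERNEL vmlinuz
            APPEND initrd=initrd

    thinstation.conf.network:
        SESSION_0_TYPE=rdesktop
        SESSION_0_RDESKTOP_SERVER=10.10.10.10
        SCREEN_RESOLUTION="1280x1024"

Here 10.10.10.10 stands for the IP address of the NLB cluster.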

A standard enterprise software package is installed on each of the cluster servers – Microsoft Office 2010, Foxit Reader, SP-Client (software that displays the characteristics of various technological nodes of the enterprise) and DiGraph (software that displays graphs of steam and electricity generation).

For correct operation and convenience of administration, a separate zone for thin clients is allocated on the enterprise's DHCP servers. This means that thin clients will obtain an IP address from the specified address pool when loading. For a thin client to connect to the terminal server, two DHCP scope options must be configured – 066 Boot Server Host Name and 067 Bootfile Name. Option 066 is set to the IP address of the server cluster, and option 067 is set to pxelinux.0 (the Thinstation 5.1 PXE boot loader file on the TFTP server). Also, for accounting purposes and to improve fault tolerance, the IP addresses of the thin clients are reserved on the enterprise's DHCP servers.
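Assuming the enterprise runs Windows-based DHCP servers, these scope options could be set from the command line roughly as shown below (the scope and server addresses are hypothetical; the same values can also be entered through the DHCP management console):

    netsh dhcp server scope 10.10.10.0 set optionvalue 066 STRING "10.10.10.10"
    netsh dhcp server scope 10.10.10.0 set optionvalue 067 STRING "pxelinux.0"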

Based on data received from the enterprise's management, user accounts are created in a separate group in the domain's Active Directory. Separate GPO group policies are created for this group to provide access to various enterprise infrastructure resources and services.

A group policy is created separately for roaming user profiles. A roaming user profile is a technology used in the Microsoft Windows family of operating systems that allows users connected to a Windows Server domain to access their profile when logging in to the operating system from various computers on the local network. The roaming user profile includes program settings, documents, registry branches and the desktop environment, including the location of icons on the desktop and other settings. When the user logs out, all the changed profile settings and documents are synchronized with the server [6]. To implement this, a folder for storing user profiles is created on the main file server of the enterprise. Access rights are configured on this folder so that a user can only view the profile of their own account. In the GPO snap-in, a corresponding policy is created that enables roaming profile technology, specifies the network path to the profile storage and applies this policy to a specific group of users. This policy is intended to preserve user profiles regardless of the project hardware. Other policies have already been created and are applied in accordance with the company's regulations.
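As an example, the roaming profile path assigned through such a policy usually follows the pattern below, where the server and share names are hypothetical and %username% is substituted by Windows for each account:

    \\fileserver01\Profiles$\%username%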

HP Compaq Pro 4000 SFF system units (Intel Pentium E5800 (3.2 GHz), 2 GB RAM, integrated video adapter so no discrete video card is required, PXE-enabled network adapter) and 22-inch HP LA2205wg monitors were selected as the thin clients. These PCs were decommissioned during the upgrade of user workplace equipment and are equipped with a network adapter that supports PXE technology, which is a prerequisite for downloading the Thinstation 5.1 OS over the network. PXE (Preboot eXecution Environment) is an environment for booting a computer using a network card without using local data carriers (hard disk, USB drive, etc.). PXE booting uses the IP, UDP, BOOTP and TFTP protocols [7]. To ensure the correct operation of this technology, the PXE and Boot LAN Option parameters are enabled in the PC BIOS, and the network adapter is set first in the boot priority.

Results of the research and discussions

At this point, the thin client project at the power enterprise has been fully implemented and put into operation. Thin client technology has been successfully operating at the enterprise for about two years. During this time it has proved to be a reliable tool with increased fault tolerance for the operational monitoring of CHP processes.

This technology was implemented using the enterprise's existing IT resources, and no money was spent on its implementation. As a result, the enterprise saved about 2 million rubles allocated for the development of its IT infrastructure, compared to the average cost of ready-made solutions from third-party hardware manufacturers and system integration companies.


Bibliographic reference

Balanov A.A., Zhidkov D.A., Kuligina N.O., Tarlakovskaya E.A. IMPLEMENTATION OF THIN CLIENT TECHNOLOGY BASED ON A FAULT-TOLERANT NLB CLUSTER FOR MONITORING PROCESSES OF POWER GENERATION ENTERPRISE // European Journal of Natural History. 2022. No. 3. P. 7-10.
URL: https://world-science.ru/ru/article/view?id=34271 (accessed: 22.11.2024).
