History of UNIX systems. Differences between UNIX and Linux

Around 1969, Ken Thompson, with the assistance of Dennis Ritchie, developed and implemented the B language, a simplified version (suited to implementation on minicomputers) of the BCPL language. B, like BCPL, was an interpreted language. In 1972, the second edition of Unix, rewritten in B, was released. In 1969-1973, a compiled language based on B was developed, called C.

Split

An important reason for the Unix split was the implementation of the TCP/IP protocol stack in 1980. Before this, machine-to-machine communication in Unix was in its infancy - the most significant method of communication was UUCP (a means of copying files from one Unix system to another, originally operating over telephone networks using modems).

Two network application programming interfaces have been proposed: Berkeley sockets and TLI (Transport Layer Interface).

The Berkeley sockets interface was developed at the University of California, Berkeley, and used the TCP/IP protocol stack developed there. TLI was created by AT&T according to the transport layer definition of the OSI model and first appeared in System V version 3. Although this version contained TLI and streams, it initially did not have an implementation of TCP/IP or other network protocols; such implementations were provided by third parties.

The implementation of TCP/IP was officially and finally included in the base distribution of System V version 4. This, along with other considerations (mostly market ones), caused the final demarcation between the two branches of Unix - BSD (Berkeley University) and System V (commercial version from AT&T). Subsequently, many companies, having licensed System V from AT&T, developed their own commercial varieties of Unix, such as AIX, CLIX, HP-UX, IRIX, Solaris.

Modern Unix implementations are generally not pure System V or pure BSD systems; they implement features of both.

Free Unix-like operating systems

Currently, GNU/Linux and members of the BSD family are rapidly taking over the market from commercial Unix systems and simultaneously penetrating both end-user desktop computers and mobile and embedded systems.

Proprietary systems

After the division of AT&T, the Unix trademark and the rights to the original source code changed hands several times, in particular, they belonged to Novell for a long time.

The influence of Unix on the evolution of operating systems

Unix systems are of great historical importance because they gave rise to some of the OS and software concepts and approaches that are popular today. Also, during the development of Unix systems, the C language was created.

Widely used in systems programming, the C language, originally created for the development of Unix, has surpassed Unix in popularity. The C language was the first “tolerant” language that did not try to impose one or another programming style on the programmer. C was the first high-level language to provide access to all processor capabilities such as references, tables, bit shifts, increments, etc. On the other hand, the freedom of the C language led to buffer overflow errors in C standard library functions such as gets and scanf. The result has been many notorious vulnerabilities, such as the one exploited by the famous Morris worm.

The early developers of Unix helped introduce the principles of modular programming and reuse into engineering practice.

Unix made it possible to use TCP/IP protocols on relatively inexpensive computers, which led to the rapid growth of the Internet. This, in turn, contributed to the rapid discovery of several major vulnerabilities in Unix security, architecture, and system utilities.

Over time, Unix's leading developers developed cultural norms for software development that became as important as Unix itself.

Unix of that time had disadvantages compared to the proprietary operating systems of hardware manufacturers (for example, the lack of serious database engines), but it was: a) cheaper, and sometimes free for academic institutions; b) portable from machine to machine, being developed in the portable C language, which "decoupled" program development from specific hardware. In addition, the user experience turned out to be "decoupled" from the hardware and the manufacturer: a person who worked with Unix on a VAX could easily work with it on a 68xxx machine, and so on.

Hardware manufacturers at that time were often cool toward Unix, considering it a toy and offering their proprietary OS for serious work - primarily DBMSs and the business applications built on them in commercial settings. DEC is known to have commented in this vein regarding its VMS. Corporations listened to the manufacturers, but the academic environment did not: it had everything it needed in Unix, often did not require official vendor support, coped on its own, and valued Unix's low cost and portability. Thus, Unix was perhaps the first OS portable to different hardware.

Unix's second major rise came with the introduction of RISC processors around 1989. Even before that, there were so-called workstations: high-powered single-user personal computers with enough memory, hard disk space, and a sufficiently developed OS (multitasking, memory protection) to work with serious applications such as CAD. Among the manufacturers of such machines, Sun Microsystems stood out, making its name on them.

Before the advent of RISC processors, these stations typically used a Motorola 680x0 processor, the same as in Apple computers (albeit with a more advanced operating system than Apple's). Around 1989, commercial implementations of RISC architecture processors appeared on the market. The logical decision of a number of companies (Sun and others) was to port Unix to these architectures, which immediately entailed the transfer of the entire software ecosystem for Unix.

Proprietary "serious" operating systems such as VMS began their decline precisely from this moment: even where the OS itself could be ported to RISC, things were much harder for its applications, which in those ecosystems were often written in assembler or in proprietary languages such as BLISS. Unix became the OS for the most powerful computers in the world.

However, at this time the ecosystem began to move to GUIs, in the form of Windows 3.0. The enormous advantages of a GUI, as well as, for example, unified support for all types of printers, were appreciated by both developers and users. This greatly undermined Unix's position in the PC market - implementations such as SCO and Interactive UNIX could not run Windows applications. As for the GUI for Unix, X11 (other, much less popular implementations existed), it could not run comfortably on an ordinary user PC because of its memory requirements: X11 needed 16 MB for normal operation, while Windows 3.1 ran Word and Excel simultaneously well enough in 8 MB (the standard PC memory size at the time). With memory prices high, this was a limiting factor.

The success of Windows gave impetus to an internal Microsoft project called Windows NT, which was API-compatible with Windows but at the same time had all the architectural features of a serious OS that Unix had: multitasking, full memory protection, support for multiprocessor machines, access rights for files and directories, and a system log. Windows NT also introduced the journaling file system NTFS, which at the time surpassed in capabilities all file systems shipped as standard with Unix; Unix analogues existed only as separate commercial products from Veritas and others.

Although Windows NT was not popular initially, due to its high memory requirements (the same 16 MB), it allowed Microsoft to enter the market for server solutions, such as database management systems. Many at the time did not believe in the ability of Microsoft, traditionally specializing in desktop software, to be a player in the enterprise software market, which already had its own big names such as Oracle and Sun. Adding to this doubt was the fact that the Microsoft DBMS - SQL Server - began as a simplified version of Sybase SQL Server, licensed from Sybase and 99% compatible in all aspects of working with it.

In the second half of the 1990s, Microsoft began to squeeze Unix in the corporate server market.

The combination of the above factors, as well as the collapse in prices for 3D video controllers, which went from professional equipment to home equipment, essentially killed the very concept of a workstation by the early 2000s.

In addition, Microsoft systems are easier to manage, especially in common use cases.

But then the third sharp rise of Unix began, this time in the form of Linux.

A serious competitor to Linux at that time was FreeBSD; however, its "cathedral" style of development management, as opposed to Linux's "bazaar" style, as well as far greater technical archaism in areas such as support for multiprocessor machines and executable file formats, greatly slowed FreeBSD's development compared to Linux, making the latter the flagship of the free software world.

Subsequently, Linux reached ever greater heights:

  • ports of serious proprietary products such as Oracle;
  • IBM's serious interest in this ecosystem as the basis for its vertical solutions;
  • the emergence of analogues of almost all familiar programs from the Windows world;
  • the abandonment by some hardware manufacturers of mandatory Windows preinstallation;
  • release of netbooks with only Linux;
  • use as a kernel in Android.

Currently, Linux is a deservedly popular OS for servers, although much less popular on desktops.

If you have recently started learning Linux and getting comfortable in this vast universe, you have probably often come across the term Unix. It sounds a lot like Linux, but what does it mean? You are probably wondering how Unix differs from Linux. The answer depends on what you understand by these words; each of them can be interpreted differently. In this article, we'll look at a simplified history of Linux and Unix to help you understand what they are and how they are related. As always, feel free to ask questions or add more information in the comments.

Unix began its history in the late 1960s and early 1970s at AT&T Bell Labs in the United States. Together with MIT and General Electric, the Bell Labs research laboratory began developing a new operating system (Multics). Some researchers were dissatisfied with the progress of its development, moved away from the main project, and began developing their own OS. In 1970 this system was named Unix, and two years later it was completely rewritten in the C programming language.

This allowed Unix to be distributed and ported to various devices and computing platforms.

As Unix continued to develop, AT&T began selling licenses for its use in universities and also for commercial purposes. This meant that not everyone could, as now, freely change and distribute the code of the Unix operating system. Soon, many editions and variants of the Unix operating system began to appear, designed to solve various problems. The most famous of them was BSD.

Linux is similar to Unix in functionality and features, but not in code base. This operating system was assembled from two projects. The first is the GNU project, developed by Richard Stallman in 1983, the second is the Linux kernel, written by Linus Torvalds in 1991.

The goal of the GNU Project was to create a system similar to, but independent of, Unix. In other words, an operating system that did not contain Unix code and could be freely distributed and modified without restrictions, like free software. Since the free Linux kernel could not run on its own, the GNU project merged with the Linux kernel, and the Linux operating system was born.

Linux was designed under the influence of the Minix system, a Unix-like OS, but all its code was written from scratch. Unlike Unix, which was used on servers and large enterprise mainframes, Linux was designed for use on a home computer with simpler hardware.

Today, Linux runs on a huge number of platforms, more than any other OS, including servers, embedded systems, microcomputers, modems, and even mobile phones. Now let's discuss the difference between Linux and Unix in more detail.

What is Unix

The term Unix can refer to the following concepts:

  • The original operating system developed at AT&T Bell Labs, on the basis of which other operating systems are developed.
  • Trademark, written in capital letters. UNIX belongs to The Open Group, which has developed a set of standards for operating systems - the Single UNIX Specification. Only those systems that comply with the standards can legitimately be called UNIX. Certification is not free and requires developers to pay to use the trademark.
  • All operating systems registered under the UNIX name because they meet the above-mentioned standards: AIX, A/UX, HP-UX, Inspur K-UX, Reliant UNIX, Solaris, IRIX, Tru64, UnixWare, z/OS and OS X - yes, even the one that runs on Apple computers.

What is Linux

The term Linux refers only to the kernel. An operating system is not complete without a desktop environment and applications. Since most applications were developed and are currently being developed under the GNU Project, the full name of the operating system is GNU/Linux.

Nowadays, many people use the term Linux to refer to all distributions based on the Linux kernel. Currently, the newest version of the Linux kernel is 4.4, version 4.5 is under development. The numbering of kernel releases was changed from 3.x to 4.x not too long ago.

Linux is a Unix-like operating system: it behaves like Unix but does not contain its code. Unix-like operating systems are often called Un*x, *NIX and *N?X, or even Unixoids. Linux does not have Unix certification, and GNU stands for "GNU's Not Unix", so in that respect Mac OS X is more Unix than Linux. Nevertheless, the Linux kernel and the GNU/Linux OS are very similar to Unix in functionality and implement most of the principles of the Unix philosophy. This includes human-readable code, storing system configuration in separate text files, and the use of small command line tools, a graphical shell, and a session manager.

It is important to note that not all Unix-like systems have received UNIX certification. In a certain context, all operating systems based on UNIX or its ideas are called UNIX-like, regardless of whether they have a UNIX certificate or not. In addition, they can be commercial and free.

I hope it is now clearer how Unix differs from Linux. But let's go even further and summarize.

Main differences

  • Linux is a free and open source operating system, while the original Unix is not (although some of its derivatives are).
  • Linux is a clone of the original Unix, but it does not contain its code.
  • The main difference between unix and linux is that Linux is only a kernel, while Unix was and is a full-fledged operating system.
  • Linux was developed for personal computers, while Unix is aimed primarily at large workstations and servers.
  • Today Linux supports more platforms than Unix.
  • Linux supports more types of file systems than Unix.

As you can see, the confusion usually arises because "Linux vs Unix" can mean completely different things. Whatever meaning is intended, the fact remains that Unix came first and Linux came later. Linux was born out of a desire for software freedom and portability, inspired by the Unix approach. It's safe to say that we all owe a debt to the free software movement, because the world would be a much worse place without it.


iron man March 19, 2011 at 11:16 pm

How does Linux differ from UNIX, and what is a UNIX-like OS?

UNIX
UNIX (not to be confused with "UNIX-like operating system") is a family of operating systems (Mac OS X, GNU/Linux).
The first system was developed in 1969 at Bell Laboratories, the research center of the American corporation AT&T.

Distinctive features of UNIX:

  1. Easy system configuration using simple, usually text, files.
  2. Extensive use of the command line.
  3. Use of pipelines.
Nowadays, UNIX is used mainly on servers and in embedded systems.
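One distinctive feature listed above is the pipeline: small utilities chained with the | operator so that the output of one command becomes the input of the next. A minimal sketch (the sample words are arbitrary):

```shell
# Count the distinct lines in a stream: sort groups duplicates
# together, uniq collapses them, wc -l counts what remains.
printf 'apple\nbanana\napple\n' | sort | uniq | wc -l
# the count printed is 2 ("apple" and "banana")
```

Each stage is a separate process; the shell connects them, so complex processing is assembled from tools that each do one thing.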
It is impossible not to note the enormous historical importance of UNIX systems: they are now recognized as among the most historically significant operating systems, and the C language was created during their development.

UNIX variants by year

UNIX-like OS
UNIX-like OS (sometimes abbreviated *nix) - a system formed under the influence of UNIX.

The word UNIX is used as a mark of conformity and as a trademark.

The Open Group consortium owns the "UNIX" trademark and is best known as the certifying authority for it. The Open Group publishes the Single UNIX Specification - the standards an operating system must meet in order to be proudly called UNIX.

You can take a look at the family tree of UNIX-like operating systems.

Linux
Linux - the general name for Unix-like operating systems developed within the framework of the GNU project (an open source software development project). Linux runs on a huge variety of processor architectures, ranging from ARM to Intel x86.

The most famous and widespread distributions are Arch Linux, CentOS, Debian. There are also many “domestic”, Russian distributions - ALT Linux, ASPLinux and others.

There is quite a bit of controversy about the naming of GNU/Linux.
Supporters of "open source" use the term "Linux", while supporters of "free software" use "GNU/Linux". I prefer the first option. Sometimes, for convenience, the spellings "GNU+Linux", "GNU-Linux" and "GNU Linux" are used for the term GNU/Linux.

Unlike commercial systems (MS Windows, Mac OS X), Linux does not have a geographical development center and a specific organization that owns the system. The system itself and the programs for it are the result of the work of huge communities, thousands of projects. Anyone can join the project or create their own!

Conclusion
Thus, we learned the chain: UNIX -> UNIX-like OS -> Linux.

To summarize, I can say that the differences between Linux and UNIX are obvious. UNIX is a much broader concept, the foundation for the construction and certification of all UNIX-like systems, and Linux is a special case of UNIX.



This system has stood the test of time and survived.

In relation to this system, a system of standards has been developed:

POSIX 1003.1-1988, 1990 - describes UNIX OS system calls (system entry points; the Application Programming Interface, API)

POSIX 1003.2-1992 - defines the command interpreter and set of utilities for the UNIX OS

POSIX 1003.1b-1993 - additions related to real-time applications

X/OPEN - a group coordinating the development of standards for the UNIX OS

Distinctive features of the UNIX OS

    The system is written in a high-level language (C), which makes it easy to understand, change and transfer to other hardware platforms. UNIX is one of the most open systems.

    UNIX is a multitasking, multiuser system with a wide range of services. One server can serve the requests of a large number of users, and only a single system then needs to be administered.

    Availability of standards. Despite the variety of versions, the basis of the entire UNIX family is a fundamentally identical architecture and a number of standard interfaces, which simplifies the transition of users from one system to another.

    Simple yet powerful modular user interface. There is a certain set of utilities, each of which solves a highly specialized problem, and from them it is possible to construct complex software processing systems.

    Using a single hierarchical, easily maintained file system that provides access to data stored in files on disk and to computer devices through a unified file system interface.

    Quite a large number of applications, including freely distributed ones.
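Two of the features above - the single hierarchical file system and the modular utilities - can be seen directly from the shell. A minimal sketch (the paths under /tmp are arbitrary):

```shell
# The file system is one tree rooted at "/"; directories nest
# to arbitrary depth and are addressed by paths.
ls -d /usr
mkdir -p /tmp/demo/a/b/c   # build a small hierarchy of our own
ls -R /tmp/demo            # list it recursively
```

The same path-based interface reaches every part of the tree, which is what makes a unified set of file utilities possible.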

Basic architecture of the UNIX operating system

Model of the UNIX system

Structure of the UNIX OS kernel

UNIX is a two-tier system model: the kernel and applications.

The kernel directly interacts with the computer hardware, isolating application programs from the hardware features of the computing system.

The kernel has a set of services provided to application programs. These include input/output operations, creation and control of processes, interaction between processes, signals, etc.

All applications request kernel services through the system call interface.

The second level consists of applications or tasks, both system ones, which determine the overall functionality of the system, and application ones, which provide the UNIX user interface. The interaction scheme of all applications with the kernel is the same.

The kernel provides the basic functionality of the operating system: it creates and manages processes, allocates memory, and provides access to files and peripheral devices. Application tasks interact with the kernel through a standard system call interface, which represents a set of kernel services and defines the format of service requests.

A process requests a service through a standardized system call that looks similar to an ordinary C library function call. The kernel processes the request on behalf of the process and returns the necessary data to it.

The kernel consists of three main subsystems:

1) file subsystem;

2) input-output subsystem;

3) process and memory management subsystem.

The file subsystem provides a unified interface for accessing data located on disk drives and on peripheral devices. The same read/write functions can be used when working with files on disk and when exchanging data with a terminal, a printer, and other external devices.

The file subsystem controls file access rights, performs file placement and deletion operations, and writes and reads data.

Since most application functions use the file system interface in their work, file access rights largely determine the user's access privileges to the system. Thus, the privileges of individual users are formed.

There are 3 user categories associated with each file:

Owner;

Owning group;

Other users.
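These three categories correspond to the three permission triplets shown by ls -l and changed with chmod. A minimal illustration (the file name is arbitrary):

```shell
# Create an empty file and give the owner full rights,
# removing all rights from the group and other users.
touch demo.txt
chmod 700 demo.txt

# The first column of the listing below reads -rwx------ :
#   rwx  owner
#   ---  owning group
#   ---  other users
ls -l demo.txt
```

Because most programs reach data through this interface, these bits are in practice what defines a user's privileges in the system.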

The file subsystem redirects requests addressed to peripheral devices to the corresponding modules of the input/output subsystem.

The I/O subsystem processes requests from the file subsystem and the process control subsystem to access peripheral devices, provides the necessary data buffering and interacts with device drivers.

Drivers are special kernel modules that directly serve external devices.

The process and memory management subsystem controls the creation and deletion of processes, the distribution of system resources (memory and processor time) between processes, process synchronization, and interprocess communication.

System resources are allocated by a special kernel task called the process scheduler. The scheduler starts system processes and ensures that no process monopolizes shared system resources.

The memory management module allocates RAM to application tasks and implements virtual memory: part of a process can be placed in secondary memory (i.e. on the hard drive) and moved into RAM as needed.

A process releases the processor before a long I/O operation or when the time slice expires. In this case, the scheduler selects the next highest priority process and starts it for execution.

The interprocess communication module is responsible for notifying processes about events using signals and for transferring data between different processes.

Brief information about the development of the UNIX OS

The UNIX OS appeared in the late 1960s as an operating system for the PDP-7 minicomputer. Ken Thompson and Dennis Ritchie took an active part in its development.

Features of the UNIX OS include: multi-user mode, new file system architecture, etc.

In 1973, most of the OS kernel was rewritten in the new C language.

Since 1974, the UNIX OS has been distributed in source code at universities in the United States.

UNIX versions

From the very beginning of the spread of UNIX, various versions of the OS began to appear in American universities.

To bring order, AT&T in 1982 combined several versions into one and called that version of the OS System III. A commercial version, System V, was released in 1983. In 1993, AT&T sold its UNIX rights to Novell, which later passed them on to X/Open and the Santa Cruz Operation (SCO).

Another line of UNIX OS, BSD, is being developed at the University of California (Berkeley). There are free versions of FreeBSD and OpenBSD.

The OSF/1 family - Open Software Foundation - includes operating systems from the consortium of IBM, DEC and Hewlett Packard. The operating systems of this family include HP-UX, AIX, Digital UNIX.

Free versions of UNIX operating systems

There are a large number of free versions of UNIX.

FreeBSD, NetBSD, OpenBSD - variants developed on the basis of the BSD OS.

The most popular family of free UNIX systems is the Linux family. The first version of Linux was developed by Linus Torvalds in 1991. Currently, there are several Linux variants: Red Hat, Mandrake, Slackware, SuSE, Debian.

General features of UNIX systems

The different flavors of UNIX share a number of common features:

Time-sharing multiprogramming based on preemptive multitasking;

Support for multi-user mode;

Using virtual memory and swap mechanisms;

Hierarchical file system;

Unification of input/output operations based on the expanded use of the concept of file;

System portability;

Availability of network means of interaction.
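The unification of input/output mentioned above means that the same commands and redirections work on regular files and on device files. A small sketch (the path under /tmp is arbitrary):

```shell
# Write to and read from a regular file...
echo "hello" > /tmp/unix_demo.txt
cat /tmp/unix_demo.txt            # prints: hello

# ...and write to a device file with exactly the same syntax.
echo "discarded" > /dev/null
```

Terminals, printers and disks are all reached through the same file interface, so no special I/O syntax is needed per device type.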

Advantages of UNIX systems

The advantages of the UNIX family of operating systems include:



Portability;

Effective implementation of multitasking;

Openness;

Availability and strict adherence to standards;

Unified file system;

Powerful command language;

Availability of a significant number of software products;

Implementation of the TCP/IP protocol stack;

Ability to work as a server or workstation.

UNIX-based servers

A server is a computer that processes requests from other computers on the network and provides its resources for storing, processing and transmitting data. A server running UNIX can perform the following roles:

File server;

Web server;

Mail server;

Remote registration (authentication) server;

Auxiliary network services servers (DNS, DHCP);

Internet access server.

Managing a UNIX Computer

When working with a UNIX system in server mode, as a rule, remote access mode is used using some terminal program.

A work session begins by entering a login name and access password.

Often, to solve server management problems, the command mode of operation is sufficient. In this case, control is performed by entering commands in a special format on the command line. The command line displays a special prompt, for example, -bash-2.05b$.

General view of the command:

  1. -bash-2.05b$ command [options] [arguments]

For example, calling OS help looks like this:

  1. -bash-2.05b$ man [keys] [topic]

For help on using the man command itself, type:

  1. -bash-2.05b$ man man

Command Line Interpretation

The following conventions are used when entering commands:

The first word on the command line is the command name;

The remaining words are arguments.

Among the arguments, keys (options) are highlighted - words (symbols) predefined for each command, starting with one (short format) or a pair of hyphens (long format). For example:

-bash-2.05b$ tar -c -f arch.tar *.c

-bash-2.05b$ tar --create --file=arch.tar *.c

When specifying options, they can be combined. For example, the following commands are equivalent:

-bash-2.05b$ ls -a -l

-bash-2.05b$ ls -l -a

-bash-2.05b$ ls -al

Other arguments indicate the objects on which the operations are performed.

Shell Variables

When working in the system, there is a way to pass parameters to programs in addition to command shell switches - environment variables. In bash, a variable is set by a simple assignment and placed into the environment with the export command (the set command without arguments lists shell variables rather than setting environment variables):

-bash-2.05b$ variable_name=value
-bash-2.05b$ export variable_name

Removing an environment variable is done with the unset command.

To access the value of a variable, use the notation $variable_name, for example the command:

-bash-2.05b$ echo $PATH

prints the value of the PATH variable.
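Putting the variable-handling commands together (GREETING is an arbitrary example name; note that a variable must be exported for child processes to see it):

```shell
# Set and export a variable; a child shell inherits it.
GREETING="hello"
export GREETING
sh -c 'echo $GREETING'         # prints: hello

# After unset the variable is gone; ${VAR:-default} supplies a fallback.
unset GREETING
echo "${GREETING:-no value}"   # prints: no value
```

A plain assignment without export is visible only to the current shell, which is why export is the essential step when passing parameters to programs.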