Geek News

Installing Linux: defining distros and which version you should choose

ComputerGuru -

In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, Finland, started working on some simple ideas for an operating system. Although the desktop computer market exploded throughout the 1990s, the Linux operating system remained pretty much the domain of geeks who liked to build their own computers. I really believed that, more than 20 years later, Linux computers would be as common in our homes as the Windows and Apple varieties.

The only dent in the domination of Windows and Apple desktop computers in recent years has been the introduction of the Chromebook as a personal computer in 2011. The Chrome operating system is a strange mix: the Linux kernel with the Google Chrome web browser serving as the user interface.

The Linux operating system has come a long way since the mid 1990s. Where installing it once meant painful experiences with floppy disks and hunting down hardware drivers, my experiences installing many distributions of Linux in recent years have been pretty painless.

The Linux kernel

Just as I did when answering the question, "what is the best desktop computer operating system," I am going to generalize a bit here so we don't get too deep into the geek speak. Hopefully the tech purists won't beat me up too much for generalizing. Let's begin by quickly going over the basic definitions.

Think of the Linux kernel as an automobile engine and drive train designed by a community. Once the engine and drive train have been developed, groups split off and design their own versions of an automobile. Each of these automotive design groups has its own community and its own goals for the finished product: some may focus on style and looks, while others may focus on being practical and functional. Once a group has a general purpose in mind, it forms an online community where ideas can be shared in creating a finished product.

The Linux Distro

Each customized version of Linux, adding its own modules and applications, is supported by an online community offering internet downloads as well as support. You will see the question phrased as "which Linux distro should you use?" Distro is a shortened version of the term distribution. There are many distros in the Linux family, all based on the same Linux kernel, the core of the operating system. There are geeks who will swear their distro is the best, but in the end it is a matter of what works best for you.

When it comes to comparing the various distributions, I find "the big three" to be very similar, because in reality they are variations of the same family. As of the time of this update, March 2017, various statistics rank Mint as the most popular version of Linux, with Debian coming in second, followed by Ubuntu. Mint is a fork of Ubuntu, which is itself a fork of Debian, so Mint is very similar indeed to Ubuntu. Mint was forked from Ubuntu with the goal of providing a familiar desktop graphical user interface.

First answer the question, why are you looking at Linux? Do you have an old computer with an outdated operating system that you are looking to upgrade? Or perhaps you just want to see what all the fuss is about with the "free" alternative to Windows or Apple?

If you simply want to play with Linux and just want to see what all the fuss is about, Mint is a very easy place to start. I have installed Mint on a few old computers with no issues. One of the biggest issues I have experienced with many versions of Linux is the lack of drivers for certain pieces of hardware in some laptop models. There are a few old Dell laptops on which I gave up installing Linux because finding drivers for the Wi-Fi was not worth the effort.

Here's a look at various distributions of Linux.

In our previous question on "what is the best desktop computer operating system" we addressed the topic of the "free" alternative to Windows or Apple as we explained open source software. Richard Stallman, the father of the free software movement, explains that free software refers to the preservation of the freedoms to use, study, distribute, and modify that software, not to zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the phrase "free as in free speech, not as in free beer." Even though Linux is open source, there are versions that are commercially distributed and supported.

Fedora - Red Hat

Red Hat Commercial Linux, introduced in 1995, was one of the first commercially supported versions of Linux, and it entered the enterprise network environment because of that support. Red Hat Linux has evolved quite a bit over the years since it merged with the community-based Fedora Project in 2003.

Fedora is now the free, community-supported home version of Red Hat Linux. Although Fedora ranks slightly behind the other distros mentioned here in popularity, it is often at the top of the list when it comes to integrating new package versions and technologies into the distribution. Many users in the enterprise environment rave about the stability of Fedora.

SUSE - openSUSE

openSUSE claims to be "the makers' choice for sysadmins, developers and desktop users." You may not find a lot of neighborhood geeks telling you to try openSUSE, but it ranks near the top of many popularity charts. SUSE was marketing Linux to the enterprise market in 1992, before Red Hat. Many American geeks are not as familiar with SUSE because it was developed in Germany. I have not had any issues installing it. You can always download a "live CD," which allows you to run the operating system off the CD without having to install it.

openSUSE is the open source version. SUSE is often used in commercial environments because professional help is available under a support contract through SUSE Linux. Having worked as a Novell NetWare systems administrator, I became involved with SUSE Linux when Novell bought the SUSE brands and trademarks in 2003, as the NetWare network operating system was coming to the end of its life. When Novell was purchased by The Attachmate Group in 2011, SUSE was spun off as an independent business unit. SUSE is geared for the business environment with SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop, each focusing on packages that fit its specific purpose.

Debian - Ubuntu - Mint

Ubuntu and Mint are Debian-based: their package manager is APT (the Advanced Package Tool), a free-software front end that works with core libraries to handle the installation and removal of software on Debian-based Linux distributions. Their packages follow the DEB (Debian) package format.
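If you are curious what working with APT actually looks like, here is a minimal sketch of a typical session on a Debian-based system. The package name vlc is just an example, and since these commands need administrator privileges, the sketch wraps them in a function you would call yourself:

```shell
#!/bin/sh
# A typical APT session on a Debian-based system (Debian, Ubuntu, Mint).
# These commands need root privileges, so they are wrapped in a function
# rather than run on load; "vlc" is only an example package name.
apt_session() {
    sudo apt update           # refresh the package lists from the repositories
    apt search "media player" # search the package lists by keyword
    sudo apt install vlc      # download and install a package plus its dependencies
    sudo apt remove vlc       # uninstall the package, keeping its config files
}
# On a Debian-based system you would run: apt_session
```

Graphical front ends like the Software Manager in Mint are doing essentially the same work behind the scenes.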

Ubuntu is often used in commercial environments because professional help is available under a support contract through Canonical, the company behind Ubuntu.

Mint is basically the same OS as Debian or Ubuntu, with a different default configuration, a lot of pre-installed applications, and a nice looking desktop. Mint was forked from the Ubuntu community with the goal of providing a familiar desktop operating system. If you are looking for something to use as a server, Debian or Ubuntu may be a better choice.


What about all the rest?

There are more than 200 different versions of Linux. Once you go beyond the versions mentioned here, you are getting into support issues. Each of the three families of Linux we mention here has a commercially supported version and a community supported version. Keep in mind that even if you are not buying support through one of the commercial versions, each of these families has a well established online community supporting the open source version.

Is it time to switch to Linux?

Back in the late 1990s I was taking a community college course on Novell networking and systems administration using Novell NetWare. As part of the curriculum we had to write a term paper on an unrelated technology topic; I chose Linux on the desktop. I concluded that I was impressed with Linux as an operating system, but that it would not become a mainstream desktop operating system until hardware companies embraced it and sold home computers with Linux installed. Twenty years later, that really has not happened.

You could make the case that the Google Chromebook is a version of Linux installed and configured along with a computer, but the Google Chromebook has not become a mainstream home computer. If all you want to do is surf the net, interact on social media, and read your email, a Google Chromebook works fine. But beyond that there are many issues.

Hardware drivers and website plugins can be a problem when using any version of Linux. Many manufacturers don't develop Linux device drivers for their hardware, so you need to search them out yourself through your Linux community. On websites that use Digital Rights Management, like Amazon Video, Netflix, or Sling, getting your streaming to work on Linux can be difficult. Some websites don't recognize Linux as an operating system, and automatic installs of plugins fail.

I know I said at the beginning of this discussion that in recent years my experience installing Linux has been pretty painless, but I have access to name brand hardware on pretty basic computers. The situation with hardware drivers and browser plugins keeps improving, but be aware that it can still be an issue at times, and it is a concern that can turn your Linux experience sour. The biggest problem I have experienced in experimenting with Linux is network card and Wi-Fi drivers in some laptop models.
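Before committing to an install, you can check from a live session whether the kernel actually found a driver for each device. A small sketch, assuming the common pciutils and iproute2 tools are present on the live image (they usually are):

```shell
#!/bin/sh
# Quick driver sanity check from a Linux live session.
# Wrapped in a function since the tools assumed here (lspci, ip, dmesg)
# may not be available everywhere; run check_drivers on the live system.
check_drivers() {
    lspci -k                  # list PCI devices and the kernel driver in use, if any
    ip link                   # list network interfaces; a missing Wi-Fi interface means no driver loaded
    dmesg | grep -i firmware  # look for kernel complaints about missing firmware files
}
# On a live Linux session you would run: check_drivers
```

If the Wi-Fi adapter shows up in lspci with no "Kernel driver in use" line, that is the laptop to research before installing.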

In our last article we discussed why Microsoft Windows is so popular. Whether you love them or hate them, many applications only have a Windows version. There are many websites that offer "open source equivalents" to your favorite applications. Some equivalents work well; others are very buggy. The key to using any open source application is looking at how active the community that supports it is. Be cautious of applications that look cool and work well but are basically created and supported by a single individual. They can become unsupported when the developer moves on without maintaining the application over time.

Take Linux for a test drive

Look for a live distribution of Linux that allows you to run a full instance of the operating system from CD, DVD, or USB without making changes to your current system. Many install downloads will offer you a live test drive of the distro that does not install anything to your hard drive. If everything works well from a live test drive, you can feel a bit more comfortable about doing the "real" install.
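Writing the downloaded ISO image to a USB stick can be done with graphical tools, or from an existing Linux system with dd. A cautious sketch follows; the ISO filename and the /dev/sdX device name are placeholders, and you must double-check the device name, because dd will overwrite that device completely:

```shell
#!/bin/sh
# Write a live Linux ISO to a USB stick with dd.
# The ISO filename and target device below are placeholders; dd destroys
# everything on the target device, so the steps are wrapped in a function
# instead of running on load.
write_live_usb() {
    iso="linuxmint.iso"  # the ISO image you downloaded (placeholder name)
    dev="/dev/sdX"       # the USB stick, NOT your hard drive; verify first
    lsblk                # list block devices so you can confirm which one is the stick
    sudo dd if="$iso" of="$dev" bs=4M status=progress conv=fsync
}
# After double-checking the device name with lsblk: write_live_usb
```

Booting from the finished stick usually means picking it from the computer's one-time boot menu at power-on.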


Desktop personal computer system basic parts defined

ComputerGuru -

If you are studying personal computers as the beginning of your career in technology, or perhaps you are just trying to understand how things work on your home computer to better deal with problems and upgrades, you can't get away with not knowing some very basic definitions of the components of a desktop personal computer system.

There have been so many types of hardware and software over the years that keeping up to date on what is current is a full time job for many computer support technicians. This section is meant to be a brief introduction to common personal computer terms; we are only covering the basics.

If you are interested in learning more, many of the topics described here are covered in more detail throughout the websites of the Guru42 Universe. Over at our sister site GeekHistory.com we explore the history of technology and the evolution of personal computers.

Basic parts defined

Computer hardware is the collection of physical elements that make up a computer system such as a hard disk drive (HDD), monitor, mouse, keyboard, CD-ROM drive, network card, system board, power supply, case, and video card.

The main system board is sometimes called the motherboard. It is the central printed circuit board (PCB) in the system and holds many of the crucial components, providing connectors for other peripherals.

The central processing unit (CPU), the brain of a computer system, is the main component on the main system board. The CPU carries out the instructions of computer programs, performing the basic arithmetical, logical, and input/output operations of the system.

System boards will have expansion slots, a CPU socket or slot, locations for memory cache and RAM, and a keyboard connector; other components may also be present. A slot is a narrow notch, groove, or opening, while a socket is a hollow piece or part into which something fits. System boards contain both sockets and slots, which are the points at which devices can be plugged in. A CPU slot is long and narrow, while a CPU socket is square.

RAM (Random Access Memory) is the computer's primary storage, which holds programming code and data being processed by the CPU.

ROM is read-only memory. ROM chips, located on circuit boards, are used to hold programming code that is permanently stored on the chip.

Flash ROM can be reprogrammed whereas regular ROM cannot be. In order to change the programming code of regular ROM, the chip must be replaced. Upgrades to Flash ROM can be downloaded from the Internet.

BIOS stands for Basic Input/Output System. The BIOS manages the startup of the computer and the ongoing input and output operations of basic components, such as a floppy disk or hard drive.

Software

Computer software is a collection of computer programs and related data that provide the instructions for telling a computer what to do.

System software provides the basic functions for computer usage and helps run the computer hardware. An operating system is a type of software that controls a computer's input and output operations, such as saving files and managing memory. Common operating systems are typically Windows based, but personal computers can also use an Apple or Linux based operating system.

Software applications represent a variety of computer programs. Some applications, such as computer games, are for the entertainment of the computer user. Others are work tools, such as word processors used for creating documents, or spreadsheet programs, which are computerized simulations of paper accounting worksheets.

Data storage

The term data is used to describe the files created by the applications. On the typical home computer you have various data files, such as the documents created by your word processor, as well as music and movies you have downloaded in the form of various types of audio and video files.

There are many types of data storage devices. A hard disk drive (HDD) is called secondary storage while memory is called primary storage because programs cannot be executed from secondary storage but must first be moved to primary storage. Basically, the CPU cannot "reach" the program still in secondary storage for execution.
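On a Linux system, for example, you can see the primary/secondary storage split for yourself with a couple of standard commands; a small sketch, assuming the usual free, df, and lsblk utilities are available (they ship with virtually every distribution):

```shell
#!/bin/sh
# Inspect primary storage (RAM) and secondary storage (disks) on Linux.
# Wrapped in a function so the sketch is safe to load anywhere.
show_storage() {
    free -h   # primary storage: total, used, and free RAM
    df -h     # secondary storage: mounted filesystems and their free space
    lsblk     # the block devices (HDDs, SSDs, USB sticks) behind those filesystems
}
# Run with: show_storage
```

Watching free while opening a large program makes the point concrete: the program is copied from disk into RAM before the CPU can execute it.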

As the personal computer has evolved over the years, so have the many forms of storage devices used to move data off your computer for storage. Early home computers had floppy disk drives, which used various forms of diskettes based on magnetic storage.

The next generation of data storage devices used optical disc technologies, first the compact disc (CD) and later the digital versatile disc (DVD).

USB flash drives are now commonly used for storage, data back-up and transfer of computer files. The USB flash drive has been replacing all other forms of data storage devices in recent years.

Peripherals

A home computer system is a combination of hardware and software components. Computer hardware describes the physical parts or components of a home computer system.

Computer peripherals are various devices used to put information into and get information out of your computer. Keyboards, mice, scanners, digital cameras, and joysticks are examples of input devices. Displays, printers, projectors, and speakers are examples of output devices.

What is the difference between a PC (personal computer) and a workstation?

In a business environment you may have a computer on your desk that is very similar to the computer you have at home, but there is one major difference, the work computer is managed as part of a LAN (local area network) that contains many other computers. In the next section we define networking terms and go into a bit more detail on the concept of a LAN.

Some definitions will state that a workstation computer is faster and more powerful than a personal computer. Not necessarily. Terms like "faster and more powerful" are pretty ambiguous. The difference is a bit more clear-cut: it is a point of reference in how they are used.

In your home you have a personal computer; it is the center of your personal technology universe. When you open up an application, it is on that computer. When you create a data file, like a Word document, you save it to that computer.

On a workstation, when you open up an application, it may be installed on your local computer, or it may be installed on an application server somewhere on your LAN. When you create a data file on your workstation, like a Word document, you save it to your personal directory on a file server that is on your LAN.

Many years ago, when computer systems were expensive, all the work was done on a mainframe, a huge computer surrounded by geeks in a special room. The end users had dumb terminals, meaning there was a keyboard and a monitor at your desk, but the box they attached to was called a dumb terminal because it did not do any work; it was dumb!

The concept of the workstation is that some of the "work" is done locally at your desktop, but some of the work could also be done on a computer somewhere else, which in the case of the LAN would be a server.

Tags: 

Desktop personal computer system basic parts defined

ComputerGuru -

If you are studying personal computers as the beginning of your career in technology, or perhaps you are just trying to understand how things work on your home computer to better deal with problems and upgrades, you can't get away with not knowing some very basic definitions of the components of a desktop personal computer system.

Computer hardware is the collection of physical elements that make up a computer system such as a hard disk drive (HDD), monitor, mouse, keyboard, CD-ROM drive, network card, system board, power supply, case, and video card.

The main system board is sometimes called the motherboard. It is the central printed circuit board (PCB) in and holds many of the crucial components of the system, providing connectors for other peripherals.

The central processing unit (CPU), the brain of a computer system is the main component on the main system board. The CPU carries out the instructions of computer programs, performs the basic arithmetical, logical, and input/output operations of the system.

System boards will have expansion slots, a CPU socket or slot, location for memory cache and RAM, and a keyboard connector. Other components may also be present. A slot is a narrow notch, groove, or opening. A socket is a hollow piece or part into which something fits. Systemboards contain both sockets and slots, which are the points at which devices can be plugged in. A CPU slot is long and narrow while a CPU socket is square.

RAM (Random Access Memory), is the computer's primary storage which holds programming code and data that is being processed by the CPU.

A hard disk drive (HDD) is called secondary storage while memory is called primary storage because programs cannot be executed from secondary storage but must first be moved to primary storage. Basically, the CPU cannot "reach" the program still in secondary storage for execution.

ROM is read-only memory. ROM chips, located on circuit boards, are used to hold programming code that is permanently stored on the chip.

Flash ROM can be reprogrammed whereas regular ROM cannot be. In order to change the programming code of regular ROM, the chip must be replaced. Upgrades to Flash ROM can be downloaded from the Internet.

BIOS stands for basic input-output system. It is used to manage the startup of the computer and ongoing input and output operations of basic components, such as a floppy disk or hard drive.

Computer software is a collection of computer programs and related data that provide the instructions for telling a computer what to do.

System software provides the basic functions for computer usage and helps run the computer hardware. An operating system is a type of software that controls a computers output and input operations, such as saving files and managing memory. Common operating systems are typically Windows based, but personal computers can also use an Apple or Linux based operating system as well.

Application software is computer software designed to perform specific tasks. Common applications include word processing such as OpenOffice.org Writer, a spread sheet such as Microsoft Excel, and business accounting such as Quick Books by Intuit.

What is the difference between a PC (personal computer) and a workstation

In a business environment you may have a computer on your desk that is very similar to the computer you have at home, but there is one major difference: the work computer is managed as part of a LAN (local area network) that contains many other computers. In the next section we define networking terms and go into a bit more detail on the concept of a LAN.

Some definitions state that a workstation computer is faster and more powerful than a personal computer. Not necessarily; terms like "faster and more powerful" are pretty ambiguous. The real difference is more clear-cut: it is a point of reference in how the two are used.

In your home you have a personal computer; it is the center of your personal technology universe. When you open an application, it runs on that computer. When you create a data file, like a Word document, you save it to that computer.

At work, a workstation is one node among many. When you open an application, it may be installed on your local computer, or it may be installed on an application server somewhere on your LAN. When you create a data file on your workstation, like a Word document, you save it to your personal directory on a file server on your LAN.

Many years ago, when computer systems were expensive, all the work was done on a mainframe: a huge computer surrounded by geeks in a special room. The end users had dumb terminals, meaning there was a keyboard and a monitor at your desk, but the box they attached to was called a dumb terminal because it did not do any work. It was dumb!

The concept of the workstation is that some of the "work" is done locally at your desktop, but some of it can also be done on a computer somewhere else; in the case of a LAN, that somewhere else is a server.


The Data Link Layer of the OSI model

ComputerGuru -

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking. The Data Link Layer deals with issues on a single segment of the network.

Layer two of the OSI model is one area where the theoretical OSI reference model differs from the practical implementation of TCP/IP described by the competing Department of Defense (DoD) model. As we will discuss, the TCP/IP implementation has a single lower layer, called the network interface layer, that encompasses Ethernet.

The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. Even though we discussed many of these Ethernet terms in discussing the Physical Layer of the OSI model, we also discuss them here in the context of the Data Link Layer.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:
• Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.
• Connection-oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.
• Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.

IEEE 802.3 is an extension of the original Ethernet and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address is a device's actual physical address, usually assigned by the hardware manufacturer and built into the network adapter card. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data.
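As a small illustration (the helper functions here are my own, not from any standard library), a 48-bit MAC address can be formatted as the familiar colon-separated hex pairs, and its first three octets identify the manufacturer (the OUI, or Organizationally Unique Identifier):

```python
def format_mac(raw: int) -> str:
    """Format a 48-bit integer as a colon-separated MAC address."""
    octets = [(raw >> shift) & 0xFF for shift in range(40, -8, -8)]
    return ":".join(f"{o:02x}" for o in octets)

def oui(mac: str) -> str:
    """The first three octets form the OUI assigned to the manufacturer."""
    return mac.upper().replace(":", "")[:6]

mac = format_mac(0x001B638451F6)
print(mac)       # 00:1b:63:84:51:f6
print(oui(mac))  # 001B63
```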

Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD, which enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to retransmit the message. If the device detects a collision again, it waits twice as long before trying to retransmit. This is known as exponential backoff.
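The doubling behavior can be sketched in a few lines. This is a toy model of the truncated binary exponential backoff used by classic Ethernet; the 51.2 microsecond slot time and the cap at 10 collisions are taken from the 10 Mbps standard, but the function name is my own:

```python
import random

SLOT_TIME = 51.2e-6  # classic 10 Mbps Ethernet slot time, in seconds

def backoff_delay(collisions: int) -> float:
    """After the nth collision, wait a random number of slot times
    drawn from 0 .. 2**n - 1 (the window is capped after 10 collisions)."""
    n = min(collisions, 10)
    return random.randrange(2 ** n) * SLOT_TIME

# The possible delay window doubles with each successive collision:
for c in range(1, 5):
    print(f"collision {c}: wait up to {(2**c - 1) * SLOT_TIME * 1e6:.1f} us")
```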

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3 standard for wired Ethernet LANs. The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of wireless LAN equipment.

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.

A network interface controller (NIC), also known as a network interface card or network adapter, implements communications using a specific physical layer and data link layer standard such as Ethernet. A typical 1990s Ethernet network interface controller had both a BNC connector (for coaxial cable) and an 8P8C connector (for twisted pair).

 


Physical Layer Topology in computer networking

ComputerGuru -

A network topology refers to the layout of the transmission medium and devices on a network. As a networking professional for many years, I can honestly say that about the only time network topology has come up is in certification testing. Here are some basic definitions.

Physical Topology:

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid).

Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables connected to it. This is an old topology in which each of the computers on the network is daisy-chained to the next. This type of network is usually peer to peer and uses Thinnet (10BASE2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. At both ends of the chain, the network must be terminated with a 50 ohm impedance terminator.

Advantages: cheap, simple to set up.
Disadvantages: excess network traffic; a failure may affect many users; problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to other networked workstations.  The star is probably the most commonly used topology today. It uses twisted pair such as 10baseT or 100baseT cabling and requires that all devices are connected to a hub.

Advantages: centralized monitoring; failures do not affect others unless the hub fails; easy to modify.
Disadvantages: if the hub fails, everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion. Physically, a ring can look much like a star, except that it uses special hubs and matching network adapters. The ring topology is used with Token Ring networks, a proprietary IBM system.

Advantages: equal access.
Disadvantages: difficult to troubleshoot; network changes and failures affect many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network.

Hybrid: Combinations of the above topologies, common on very large networks. For example, a star-bus network has hubs connected in a row (like a bus network) with computers connected to each hub.

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment.

Logical Topology:

Logical topology defines the network path that a signal follows (ring or bus), regardless of its physical design.

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.


LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some method must be used to allow only one device access to the network media at a time. This is done in two main ways: carrier sense multiple access / collision detection (CSMA/CD) and token passing.

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device.

For CSMA/CD networks, switches segment the network into multiple collision domains.
 


The Internet Family of Protocols: The TCP/IP protocol suite

ComputerGuru -

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. TCP/IP is not a single protocol, but rather an entire family of protocols.

The network concept of a protocol establishes a set of rules so that each system can speak the other's language in order for them to communicate. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.

Transmission Control Protocol (TCP) and Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family. A protocol stack is a layered set of protocols working together to provide a set of network functions. Each protocol/layer services the layer above by using the layer below.


Internet Protocol (IP)

Internet Protocol (IP) envelops and addresses the data, enables the network to read the envelope and forward the data to its destination, and defines how much data can fit in a single packet. IP is responsible for routing packets between computers.

Internet Protocol (IP) is a connectionless, unreliable datagram protocol, which means that a session is not created before sending data. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP.

An IP packet, also known as an IP datagram, consists of an IP header and an IP payload. Among the header fields used for addressing and routing are the source IP address (the original source of the datagram) and the destination IP address (its final destination).

The Time-to-Live (TTL) field designates the number of network segments on which the datagram is allowed to travel before being discarded by a router. The TTL is set by the sending host and is used to prevent packets from endlessly circulating on an IP internetwork. When forwarding an IP packet, routers are required to decrease the TTL by at least 1.
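The header fields just described can be seen by unpacking raw bytes. The sketch below assumes the fixed 20-byte IPv4 header with no options; the function name and the sample addresses are illustrative only:

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (no options)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,               # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built sample header: version 4, TTL 64, protocol 6 (TCP).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.10"),
                     socket.inet_aton("93.184.216.34"))
print(parse_ipv4_header(sample))
```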

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data. TCP is based on point-to-point communication between two network hosts. TCP receives data from programs and processes this data as a stream of bytes. Bytes are grouped into segments that TCP then numbers and sequences for delivery.

Transmission Control Protocol (TCP) is connection oriented and provides a reliable delivery service: an acknowledgment (ACK) verifies that the host has received each segment of the message. Acknowledgments are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgments to track successful packet transfer.

Before two TCP hosts can exchange data, they must first establish a session with each other. A TCP session is initialized through a process known as a three-way handshake. This process synchronizes sequence numbers and provides control information that is needed to establish a virtual connection between both hosts.

Once the initial three-way handshake completes, segments are sent and acknowledged in a sequential manner between both the sending and receiving hosts. A similar handshake process is used by TCP before closing a connection to verify that both hosts are finished sending and receiving all data.

Data sent by using Transmission Control Protocol (TCP) is delivered to a specific program port. TCP ports are more complex and operate differently from UDP ports.

While a UDP port operates as a single message queue and the network endpoint for UDP-based communication, the final endpoint for all TCP communication is a unique connection. Each TCP connection is uniquely identified by its two endpoints: the pairing of source and destination IP addresses and ports.
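A minimal loopback example makes both points concrete: the three-way handshake happens inside `connect()`, and afterwards each side can read the endpoint pair that identifies the connection. This sketch uses only the standard `socket` module:

```python
import socket
import threading

# A listening server on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

def accept_one():
    conn, _ = server.accept()      # completes the server side of the handshake
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))       # the three-way handshake happens here

# The connection is uniquely identified by both endpoints:
local, remote = client.getsockname(), client.getpeername()
print("local endpoint: ", local)
print("remote endpoint:", remote)

client.close()
t.join()
server.close()
```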

Comparison between the OSI and TCP/IP Models

TCP/IP Model Layer 4. Application Layer

The Application layer is the topmost layer of the four-layer TCP/IP model, sitting above the Transport layer. It defines the TCP/IP application protocols and how host programs interface with Transport layer services to use the network.

Application layer includes all the higher-level protocols:

  • DNS (Domain Name System) translates host names into IP addresses.
  • HTTP (Hypertext Transfer Protocol) is the protocol used to transport web pages.
  • FTP (File Transfer Protocol) is used to upload and download files.
  • TFTP (Trivial File Transfer Protocol) is a simpler protocol also used to upload and download files.
  • SNMP (Simple Network Management Protocol) is designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.
  • SMTP (Simple Mail Transfer Protocol) is used for transferring email across the Internet.
  • DHCP (Dynamic Host Configuration Protocol) is used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.
  • X Windows, Telnet, SSH, and RDP (Remote Desktop Protocol) provide remote access and remote display services.
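Some of these application-layer services are visible directly from Python's standard library: a DNS lookup, plus the well-known TCP ports the system's services database records for several of the protocols above ("localhost" is used so no network access is needed):

```python
import socket

# DNS in action at the application layer: resolve a name to an IPv4 address.
addr = socket.gethostbyname("localhost")
print(addr)   # typically 127.0.0.1

# Well-known TCP ports for several of the protocols listed above,
# if present in this system's services database:
for name in ("http", "ftp", "smtp", "telnet"):
    try:
        print(name, socket.getservbyname(name, "tcp"))
    except OSError:
        pass  # no entry for this service on this system
```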
     

TCP/IP Model Layer 3. Transport Layer

The Transport layer is the third layer of the four-layer TCP/IP model, positioned between the Application layer and the Internet layer. Its purpose is to permit devices on the source and destination hosts to carry on a conversation. The Transport layer defines the level of service and the status of the connection used when transporting data.

The main protocols included at Transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

TCP/IP Model Layer 2. Internet Layer

The Internet layer is the second layer of the four-layer TCP/IP model, positioned between the Network Access layer and the Transport layer. The Internet layer packs data into packets known as IP datagrams, which contain source and destination address (logical or IP address) information used to forward the datagrams between hosts and across networks. The Internet layer is also responsible for routing IP datagrams.

A packet-switching network depends on a connectionless internetwork layer, known here as the Internet layer. Its job is to allow hosts to insert packets into any network and have them delivered independently to the destination. At the destination, packets may arrive in a different order than they were sent; it is the job of the higher layers to rearrange them before delivering them to the proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol) and IGMP (Internet Group Management Protocol).

Reverse Address Resolution Protocol (RARP) was adapted from ARP and provides the reverse functionality: it determines a software (IP) address from a hardware (MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information such as with the use of PING and TRACERT utilities.

TCP/IP Model Layer 1. Network Access Layer

Network Access Layer is the first layer of the four layer TCP/IP model. Network Access Layer defines details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.

The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.

The most popular LAN architecture among those listed above is Ethernet. When it operates on a shared medium, Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection). An access method determines how a host places data on the medium.

In the CSMA/CD access method, every host has equal access to the medium and can place data on the wire when the wire is free of network traffic. When a host wants to transmit, it first checks whether another host is already using the medium. If there is traffic, the host waits; if not, it transmits. But if two systems place data on the medium at the same instant, the signals collide, destroying the data, which must then be retransmitted. After a collision, each host waits a small random interval of time before retransmitting.
 


The Physical Layer of the OSI model

ComputerGuru -

The Physical Layer consists of the basic hardware transmission technologies of a network, sometimes referred to as the physical media. Physical media provide the electro-mechanical interface through which data moves among devices on the network.

Initially, physical media were thought of as some sort of wire; as technology progresses, the types of media grow.

Bounded media transmit signals by sending electricity or light over a cable. Unbounded media transmit data without the benefit of a conduit; they might transmit data through open air, water, or even a vacuum. Simply put, media is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Data communications definitions:

Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), connections run over the standard copper phone lines found in most homes.

Integrated Services Digital Network (ISDN) uses a single wire or fiber optic line to carry voice, data, and video signals.

In the early days of connecting your computer to the internet most folks had Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), and all connections were run over the standard copper phone lines. In order for the digital world of computers to talk over analog phone lines you needed to use a MODEM.

The term modem comes from the words modulator and demodulator: it is a device that modulates a carrier signal to encode digital information and demodulates such a signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.
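Modulation can be sketched with a toy example of binary frequency-shift keying, the scheme early modems used: each bit becomes a short burst of one of two tones. The frequencies below are the Bell 103 originate-side pair; everything else (function name, sample rate) is illustrative:

```python
import math

SAMPLE_RATE = 8000          # samples per second
BAUD = 300                  # bits per second, like a Bell 103 modem
F0, F1 = 1070, 1270         # space (0) and mark (1) tone frequencies, Hz

def modulate(bits):
    """Turn a list of bits into a sampled audio waveform (one tone per bit)."""
    samples = []
    per_bit = SAMPLE_RATE // BAUD
    for i, bit in enumerate(bits):
        freq = F1 if bit else F0
        for n in range(per_bit):
            t = (i * per_bit + n) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

wave = modulate([1, 0, 1, 1])
print(len(wave))   # 4 bits * (8000 // 300) samples each = 104
```

A demodulator would run the process in reverse, detecting which tone is present in each bit-length slice of the waveform.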

Modem standards, or V-dot modem standards, are defined by the ITU (International Telecommunication Union). The FCC has limited the speed of analog transmissions to 53 Kbps.

Basic Rate Interface (BRI) is most commonly used in residential ISDN connections. It's composed of two bearer (B) channels at 64 Kbps each for a total of 128 Kbps (used for voice and data) and one delta (D) channel at 16 Kbps (used for controlling the B channels and signal transmission). The total bandwidth is up to 144 Kbps.

Primary Rate Interface (PRI) is most commonly used between a PBX (Private Branch Exchange) at the customer's site and the central office of the phone company. It is composed of 23 B channels at 64 Kbps and one D channel at 64 Kbps. The total bandwidth is up to 1,536 Kbps.

Digital Subscriber Line (DSL) technologies use existing, regular copper phone lines to transmit data. DSL hardware can transmit data using three channels over the same wire. In a typical setup, a user connected through a DSL hookup can send data at 640 Kbps, receive data at 1.5 Mbps, and still carry on a standard phone conversation over one line.

T-Carrier Technology is a digital transmission service used to create point-to-point private networks and to establish direct connections to Internet Service Providers. It uses four wires, one pair to transmit and another to receive.

T-1 lines support data transfer at rates of 1.544 megabits per second. Each T-1 line contains 24 channels. The E1 line is the European counterpart that transmits data at 2.048 Mbps.

T-3 has 672 (64 Kbps) channels, for a total data rate of 44.736 Mbps. The E3 line is the European counterpart that transmits data at 34.368 Mbps.
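The channel arithmetic behind the ISDN and T-carrier figures above is simple multiples of the 64 Kbps channel rate, and can be checked in a few lines. Note that the quoted line rates (1.544 and 44.736 Mbps) are slightly higher than the channel payload because they include framing overhead; the function name here is my own:

```python
DS0 = 64  # Kbps per digital channel

def capacity_kbps(b_channels, d_channels=0, d_rate=64):
    """Payload bandwidth: bearer channels plus any signaling (D) channels."""
    return b_channels * DS0 + d_channels * d_rate

print("ISDN BRI:", capacity_kbps(2, 1, d_rate=16), "Kbps")  # 144
print("ISDN PRI:", capacity_kbps(23, 1), "Kbps")            # 1536
print("T-1:     ", capacity_kbps(24), "Kbps")               # 1536 payload
print("T-3:     ", capacity_kbps(672), "Kbps")              # 43008 payload
```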

Cable connections provide access to the Internet through the same coaxial cable that brings cable TV into your home. A signal splitter installed by the cable company isolates the Internet signals from the TV signals. The two-way cable connection is always available and can be very fast. Speeds up to 30 Mbps are claimed to be possible, although speeds in the 1 to 2 Mbps range are more typical.

The Physical Layer Ethernet Specifications

Ethernet is a family of computer networking technologies for local area (LAN) and larger networks originally developed at Xerox PARC in the 1970s. Robert Metcalfe, one of the inventors of Ethernet, left Xerox PARC in 1979 to create 3Com Corporation to focus on deploying Ethernet technology.

In 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. IEEE 802.3 is a working group and a collection of IEEE standards focusing on wired Ethernet.

Twisted-pair Ethernet cable has the following specifications: a maximum of 1,024 attached workstations, a maximum of 4 repeaters between communicating workstations, and a maximum segment length of 328 feet (100 meters).

The 100BASE-TX specification uses two pairs of Category 5 UTP or Type 1 STP cabling at a 100 Mbps data transmission speed. Each segment can be up to 100 meters long.

100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters long.

Fiber optic cable (IEEE 802.8) has a glass center core surrounded by cladding, composed of varying layers of reflective glass, that refracts light back into the core. Maximum length is 25 kilometers and speeds run up to 2 Gbps, but it is very expensive, so it is best used for a backbone.

100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media. Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex, single-mode fiber media has a maximum segment length of 10,000 meters.

Other wired LAN technologies

Ethernet has largely replaced competing wired LAN technologies such as token ring, token bus, and ARCNET.

IEEE standard 802.4 defined Token Bus, which was mainly used for industrial applications; General Motors used it in their Manufacturing Automation Protocol (MAP) standardization effort. The IEEE 802.4 Working Group has been disbanded and the standard withdrawn.

Token Ring was IBM's protocol of choice, standardized as IEEE 802.5. Introduced by IBM in 1984, Token Ring was fairly successful in corporate environments but gradually lost out to Ethernet.

ARCNET was a very early LAN system, a token-passing bus with a 2.5 Mbit/sec speed, popular in the 1980s.

Wireless standards

The standards defining the physical layer of wired Ethernet are known as IEEE 802.3, which is part of a larger set of project 802 standards by the Institute of Electrical and Electronics Engineers Standards Association.

IEEE 802.11 defines wireless LANs (WLANs): 802.11 and 802.11x refer to a family of specifications developed by the IEEE for WLAN technology. 802.11 specifies an over-the-air interface between a wireless client and a base station or between two wireless clients.

IEEE 802.15 defines Bluetooth, a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) between fixed and mobile devices, and for building personal area networks (PANs).

IEEE 802.16 defines standards for broadband wireless metropolitan area networks. Officially called WirelessMAN within the IEEE, the technology has been commercialized under the name WiMAX.

While many of the older data communications technologies in your world may have been replaced with modern ones, there are many reasons why you may need to know about them. You will get a better understanding of how things are done on your current network if you understand how networking has evolved.

If you ever work in consulting you may be surprised to find out how much of what you call obsolete is still in use. You will also find questions on older technologies on various certification tests.


What is the difference between the Internet and OSI reference model

ComputerGuru -

When learning computer networking it is essential to have a general idea of the different computer networking reference models and the reasoning behind the layered approach. Both the TCP/IP network model and the OSI model create a reference model for computer networking. The OSI model is widely used to teach students, as it was created in the mindset of a reference book. The TCP/IP standards were created to provide guidance to people actually implementing networking technology, in the mindset of a service manual. Much like the answer to the question of why the internet was created, the answer to why we need the OSI model depends on who you ask. Here at ComputerGuru.net we try to explain the basics of the OSI model as it relates to understanding basic computer networking.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. You will often find teachers, and websites, making direct comparisons of the different models. Don't get too hung up on drawing direct comparisons between the two. Our discussion here addresses some commonly asked questions and gives some historical perspective on how the models have evolved.


The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier. The theoretical OSI Reference Model is the creation of the Geneva-based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry. The OSI model was first published in 1984 as ISO 7498: Information processing systems -- Open Systems Interconnection -- Basic Reference Model.

The Internet model is often compared to the OSI model. This internet model has many names, such as the DoD reference model or the ARPANET reference model, because, like the internet itself, the TCP/IP protocol suite has evolved over the years. ARPANET was the original name of the network we now call the internet. ARPA, currently known as DARPA, the Defense Advanced Research Projects Agency, is funded by the DoD (Department of Defense).

Unlike the International Organization for Standardization (ISO), where one main library of information maintains specific standards, the internet is an ever-evolving network with many entities working together to maintain standards. A collection of documents known as Requests for Comments (RFCs), maintained by the Internet Engineering Task Force (IETF), describes the various technology specifications.

Simple talk and some needed geek speak

Since TCP/IP is the primary networking language of the internet, everyone who works in the field of technology needs to have at least a simple understanding of how it works and its role in the big picture of the internet. In the spirit of the Guru 42 family of websites, we attempt to tackle the basic understanding using as simple terms as possible.

To understand the role of TCP/IP in the big picture of the internet, we need to delve just a bit into the geek speak of the internet. If you want to learn more, and really delve into how the internet works and its interesting history, you will need an understanding of the IETF and RFCs.

What is an RFC?

The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

In computer network engineering, a Request for Comments (RFC) is a formal document published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the global community of computer network researchers to establish Internet standards. The Internet Engineering Task Force (IETF) develops and promotes voluntary Internet standards.

The IETF started out as an activity supported by the U.S. federal government, but since 1993 it has operated as a standards development function under the auspices of the Internet Society, an international membership-based non-profit organization.

Which came first the Internet model or the ISO model?

A question often asked is which network reference model came first. Various sources state that the groundwork for the Open Systems Interconnection model (OSI Model) was started in the 1970s by a group at Honeywell Information Systems. Other sources point to two projects that began independently in the 1970s to define a unifying standard for the architecture of networking systems. One was administered by the International Organization for Standardization (ISO), and one by the International Telegraph and Telephone Consultative Committee (CCITT).

RFC 871, published in September 1982, is one of the first formal descriptions of the ARPANET Reference Model (ARM). The introduction of RFC 871 addresses the history of the internet model versus the ISO model.

"Since well before ISO even took an interest in "networking", workers in the ARPA-sponsored research community have been going about their business of doing research and development in intercomputer networking with a particular frame of reference in mind."

Is there an official document that explains the ARPANET Reference Model (ARM)?

RFC 871 was published in September 1982 as a recollection of the past by one of the developers of the ARPANET Reference Model, which the author describes as "a perspective on the ARM." The author points out that the ARPANET Network Working Group (NWG), which was the collective source of the ARM, hadn't had an official general meeting since October 1971.

The four layer internet model was defined in Request for Comments 1122 and 1123. RFC 1122, published October 1989, covers the link layer, IP layer, and transport layer, and companion RFC 1123 covers the applications layer and support protocols.

The TCP/IP Model is not merely a reduced version of the OSI Reference Model with a straight line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. As you read through many of the RFC documents on IETF protocol development you will see direct statements that the authors are not concerned with strict layering, such as section 3 of RFC 3439, which is titled "Layering Considered Harmful."

The links below to RFC 1958 and RFC 3439 will help you understand the general mindset of the developers of TCP/IP. RFC 1122 and RFC 1123 are the definitions of the four protocol layers of the TCP/IP model. As the constantly growing library of RFCs illustrates, the TCP/IP model is an ongoing evolution.
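To make the layer comparison concrete, here is a small sketch of the rough correspondence that is commonly drawn between the four TCP/IP layers (as defined in RFC 1122 and RFC 1123) and the seven OSI layers. As RFC 3439 cautions, this mapping is approximate, not an official one-to-one alignment:

```python
# A rough, commonly drawn mapping of the four TCP/IP layers (RFC 1122/1123)
# to the seven OSI layers. The correspondence is approximate: the TCP/IP
# authors were deliberately not concerned with strict layering.
tcpip_to_osi = {
    "Application": ["Application", "Presentation", "Session"],
    "Transport":   ["Transport"],
    "Internet":    ["Network"],
    "Link":        ["Data Link", "Physical"],
}

for tcpip_layer, osi_layers in tcpip_to_osi.items():
    print(f"{tcpip_layer:12} ~ OSI: {', '.join(osi_layers)}")
```

Notice that the single TCP/IP application layer absorbs three OSI layers, which is one reason the two models resist a straight line comparison.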

References:

Request for Comments (RFC) http://www.ietf.org/rfc.html

Memos in the Requests for Comments (RFC) document series contain technical and organizational notes about the Internet. The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet.

RFC 871: September 1982 https://tools.ietf.org/html/rfc871

A perspective on the ARPANET REFERENCE MODEL
Abstract: The paper, by one of its developers, describes the conceptual framework in which the ARPANET intercomputer networking protocol suite, including the DoD standard Transmission Control Protocol (TCP) and Internet Protocol (IP), was designed.

RFC 1122: October 1989 https://tools.ietf.org/html/rfc1122
This RFC covers the communications protocol layers: link layer, IP layer, and transport layer.

RFC 1123: October 1989 https://tools.ietf.org/html/rfc1123
This RFC covers the applications layer and support protocols.

RFC 1958: June 1996 https://tools.ietf.org/html/rfc1958
Architectural Principles of the Internet

RFC 3439: December 2002 https://tools.ietf.org/html/rfc3439
Internet Architectural Guidelines
Extends RFC 1958 by outlining some of the philosophical guidelines to which architects and designers of Internet backbone networks should adhere.


Links to learn more:

Check out our site Geek History, where we discuss the evolution of the ARPANET and TCP/IP.

Why was the internet created: 1957 Sputnik launches ARPA
http://geekhistory.com/content/why-was-internet-created-1957-sputnik-launches-arpa

When was internet invented: J.C.R. Licklider guides 1960s ARPA Vision
http://geekhistory.com/content/when-was-internet-invented-jcr-licklider-guides-1960s-arpa-vision

In the 1960s Paul Baran developed packet switching
http://geekhistory.com/content/1960s-paul-baran-developed-packet-switching

The 1980s internet protocols become universal language of computers
http://geekhistory.com/content/1980s-internet-protocols-become-universal-language-computers

Photo: Interface Message Processor (IMP) ARPANET packet routing


The OSI model explained in simple terms

ComputerGuru -

Learning technology isn't sexy, but I am doing my best to keep it interesting. Here I take on the complex subject of the computer networking OSI model, explained in simple terms. In our previous article, Understanding the mystical OSI Model explained in simple terms, we used an analogy to illustrate the OSI model.

Why is the OSI Reference Model important?

Simply put, the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference Model is based on the ACTUAL standards of the internet, which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts. The OSI Reference Model is an attempt by the International Organization for Standardization to create a set of computer networking standards. A "Reference Model" is a set of text book definitions. You often learn something new by first learning text book definitions. The common protocol suite of computer networking is TCP/IP. The geeks who created TCP/IP were not as concerned with creating a pretty "reference model." TCP/IP evolved over many years as the internet itself grew from theory to reality.


The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight line comparison of the four layers of the TCP/IP model to seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one to one correlation of each aspect.


The stated purpose of the OSI Model:

  • breaks network communication into smaller, simpler parts that are easier to develop.
  • facilitates standardization of network components to allow multiple-vendor development and support.
  • allows different types of network hardware and software to communicate with each other.
  • prevents changes in one layer from affecting the other layers so that they can develop more quickly.
  • breaks network communication into smaller parts to make it easier to learn and understand.


The seven Layers of the OSI Model

The hierarchical layering of protocols on a computer that forms the OSI model is known as a stack. A given layer in a stack sends commands to layers below it and services commands from layers above it.

The seven layers in order from highest to lowest are Application, Presentation, Session, Transport, Network, Data Link, and Physical. They can be remembered by using the following memory aide: All People Seem To Need Data Processing.

The Application layer includes network software that directly serves the user, providing such things as the user interface and application features. The Application layer is usually made available by using an Application Programmer Interface (API), or hooks, which are made available by the networking vendor.

The Presentation layer translates data to ensure that it is presented properly for the end user. It also handles related issues such as data encryption and compression, and how data is structured, as in a database.

The Session layer comes into play primarily at the beginning and end of a transmission. At the beginning of the transmission, it makes known its intent to transmit. At the end of the transmission, the Session layer determines if the transmission was successful. This layer also manages errors that occur in the upper layers, such as a shortage of memory or disk space necessary to complete an operation, or printer errors.

The Transport layer provides the upper layers with a communication channel to the network. The Transport layer collects and reassembles any packets, organizing the segments for delivery and ensuring the reliability of data delivery by detecting and attempting to correct problems that occurred.

The Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

The Data Link layer provides a system through which network devices can share the communication channel. This function is called media-access control (MAC).

The Physical layer provides the electro-mechanical interface through which data moves among devices on the network.

In the articles that follow we will break down each layer in more detail, covering topics you will need to know as a networking professional.
 


Understanding the mystical OSI Model explained in simple terms

ComputerGuru -

As you begin your quest to learn computer networking one of the first tasks you have before you is a basic understanding of the OSI model.

For many folks understanding the OSI model is like trying to understand some mystical formula that controls the way computer networks operate.

As we help you begin your journey to understanding computer networking, we will tackle the complex subject of the computer networking OSI model in simple terms, in hopes that you will gain an understanding of the reasons behind the definitions.

You can find a lot of resources that define the components of the OSI model, but an understanding of the reasons behind the definitions will go a long way toward fully understanding this complex technology model.

The acronym and the organization behind it can get confusing. The formal name for the OSI model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort among many vendors to develop hardware and software that could be used together. The model is a product of the International Organization for Standardization (2), which is often abbreviated ISO.


The logic behind the OSI model

Before we delve into the OSI model, let us take a moment to understand the organization behind it. You may have seen the term ISO certified in various technology areas. ISO, the International Organization for Standardization, (1) is the world's largest developer and publisher of International Standards. ISO helps to manage and create international standards in many technical areas to ensure the same quality of a product or process regardless of location or company.

The OSI (Open Systems Interconnection) model provides a set of general design guidelines for data communications systems and gives a standard way to describe how various layers of data communication systems interact. Applying the logic of the ISO standards to computer networking, a computer component or computer software needs to comply with a set of standards so that the product or process will work no matter where in the world we are, and no matter who in the world is producing it.

Putting the OSI model into perspective

Strive for a good understanding of the intent of the model and a few of the core principles; that will go a long way toward an overall understanding of computer networking. Do not focus on the intricate details of the OSI model at first, as the more you read the more confused you may get. The model was created in the 1970s and the technology is ever changing. Many text books will contradict each other on some aspects of the upper layers. Some of the reasoning behind the upper layers is for processes that are not nearly as useful today as they were many years ago, and for that reason many other network models blend the upper three layers into a single layer.

Basic definitions of the OSI Model

The seven layers of the OSI Model can be remembered by using the following memory aide: All People Seem To Need Data Processing. As you say the phrase, write down the first letter of each word, and that will help you to remember the seven layers in order from highest to lowest: Application, Presentation, Session, Transport, Network, Data Link, and Physical. We will briefly discuss the lower four layers from the bottom up.

Layer one, the Physical layer provides the path through which data moves among devices on the network.

Layer two, the Data Link layer provides a system through which network devices can share the communication channel.

Layer three, the Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

Layer four, the Transport layer provides the upper layers with a communication channel to the network.

An analogy to understand the model

Some of the reasons behind the OSI model are to break network communication into smaller, simpler parts that are easier to develop, and to facilitate standardization of network components to allow multiple vendor development and support.

Let's take the reasons behind the OSI model and apply them to something totally different to illustrate how they are used. If we wanted to start a railroad and build a new type of train from scratch, and we wanted this train to be able to use existing train tracks, and existing train stations so our new system could get up and running quickly, we would need to understand what existing standards are currently in place.

Even if we never had to build a set of train tracks, we would need to understand the standards by which train tracks were designed and built so we could ensure our train could operate on them, and how the track is shared. Likewise, in order for components to operate, manufacturers must understand the track, layer one, and how the track is shared, layer two.

If we are building trains, not train stations, we need to know the size and shape of other vehicles using the tracks so our trains could use the same track as all the other trains. Layer one of the OSI model gives us the path, or the track we use for communication. Layer one, referred to as the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Once you have more than one train on the track, you need to find a way to share the track. Layer two provides a system through which network devices can share the communication channel, or in the case of our analogy, share the track. One of the functions of layer two is called media access control (MAC). If you think about the term media access control you can break it down into the two parts it represents, the media or the track, and access control, or the sharing of the track.

In the OSI model layers one and two represent the media, or the physical components. Layers three through seven represent the logical, or the software components.

In layer three of the OSI model, the Network layer, the logical decision is made as to which physical path the information should follow from its source to its destination.

In order to continue our analogy to understand this complex set of rules, think of the track system that has already been built as layers one and two. Once this track system is in place we need a system to control the routing of the train system that runs on the tracks. Think of layers three through seven as processes which affect the train itself, which would represent the actual package of information being transported along the tracks. The main purpose of layer three is switching and routing.

Layer four of the OSI model, the transport layer ensures the reliability of data delivery by detecting and attempting to correct problems that occurred. In terms of our analogy, think of this as a set of standards and procedures that allows our train to arrive safely at its destination in a timely manner.

Learning and understanding the OSI model can be confusing. The goal of this article was not to define the layers of the OSI model from a purely technical nature, but to offer an analogy to understand why it is needed and how it is used to establish standards for data communications. In our next article we will go over the basic definitions of all the layers of the OSI model.


Sources:
(1) http://www.iso.org/iso/about.htm
(2) http://www.iso.org/iso/home.html


Basic computer networking explained in simple terms

ComputerGuru -

Whether you are a business manager learning the language of technology to better communicate with IT staff, or just beginning your IT career, don't overlook a basic understanding of computer networking.

What is computer networking?

The simplest definition of a computer network is a group of computers that are able to communicate with one another and share a resource. A computer network is a collection of hardware and software that enables a group of devices to communicate and provides users with access to shared resources such as data, files, programs, and operations.

In simplest terms, a computer network is created to share. In teaching computer networking I often commented that if you find someone who didn't like to use the computer network, they probably had a personal issue with the concept of sharing.

We live in a world of data and information. We love to share data and information. All that data and information gets from my house to your house thanks to the concepts of computer networking. We need computer networking to build the vehicles that transport data and information.


Common networking terms

Each device on a network is called a node. In order for communications to take place, you need the software, the network operating system (NOS), and the means of communication between networked computers, known as the media.

In computer networking the term media refers to the actual path over which an electrical signal travels as it moves from one component to another. The media can be physical such as a specialized cable or various forms of wireless media such as infrared transmission or radio signals.

A network interface card (NIC) enables two computers to send and receive data over the network media.

What is a protocol?

A network protocol is an agreed upon set of rules that defines how networked computers communicate. Different types of computers, using different operating systems, can communicate with each other and share information as long as they follow the network protocols.

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. You will often see the terms protocol suite or protocol stack used interchangeably. The protocol stack is an implementation of a computer networking protocol suite.

What is a LAN (Local Area Network) versus a WAN (Wide Area Network)?

In a typical LAN (Local Area Network) a group of computers and devices are connected together by a switch, or stack of switches, using a private addressing scheme as defined by the TCP/IP protocol. You may not be familiar with the specific function of a network switch or the definition of a private addressing scheme; these are more advanced topics of computer networking.

Private addresses are unique in relation to other computers on the local network. Routers are found at the boundary of a LAN, connecting them to the larger WAN.

In a WAN (Wide Area Network) you will have multiple LANs connected together using routers. I was taught many years ago that a WAN had nothing to do with the size of a computer network, but was simply connecting multiple LANs together across the public highway system, such as the internet.

People often try to explain concepts like LAN and WAN using terms and descriptions that have nothing to do with the definition. I often see people put numbers of computers into their definitions of LAN and WAN. If you have a three computer LAN that uses the public highway, as in the internet and internet addressing, to connect to another three computer network, the two LANs working together form a WAN.

You may not be familiar with the specific function of a network switch versus a router, or the definition of a private addressing scheme versus a public address; these are more advanced topics of computer networking, but they are the core elements that separate a LAN from a WAN.
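The private versus public addressing distinction is easy to see in practice. Python's standard library `ipaddress` module knows the reserved private ranges, so a short sketch (the addresses below are examples only) can classify an address as LAN-style private or internet-routable public:

```python
import ipaddress

# Private (RFC 1918) addresses are typical inside a LAN; public addresses
# identify networks across the WAN (the internet). Example addresses only.
for addr in ["192.168.1.10", "10.0.0.5", "172.16.4.2", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    scope = "private (LAN)" if ip.is_private else "public (WAN/internet)"
    print(f"{addr:12} {scope}")
```

The first three addresses fall in the private ranges you will see on home and office LANs, while 8.8.8.8 is a public internet address.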

What is the client server network model?

In the most common network model, client server, at least one centralized server manages shared resources and security for the other network users and computers. A network connection is only made when information needs to be accessed by a user. This lack of a continuous network connection provides network efficiency.

The client requests services or information from the server computer. The server responds to the client's request by sending the results of the request back to the client computer.
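The request/response exchange described above can be sketched with Python's standard `socket` module. This is a minimal illustration over the loopback interface, not a real network service; the request text is made up, and the OS picks a free port:

```python
import socket
import threading

HOST = "127.0.0.1"  # loopback interface, so the sketch runs on one machine

# Server side: bind to an ephemeral port, wait for one client request,
# and send back a result.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))              # port 0 lets the OS pick a free port
srv.listen(1)
PORT = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)             # the client's request
        conn.sendall(b"result of " + request) # the server's response

t = threading.Thread(target=serve_one)
t.start()

# Client side: connect, request a service, then read the server's response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"list files")
    reply = cli.recv(1024).decode()

t.join()
srv.close()
print(reply)
```

A real server would loop, serving many clients and closing each connection when the request is done, which is the "no continuous connection" efficiency described above.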

Security and permissions can be managed by administrators which cuts down on security and rights issues when dealing with a large number of workstations. This model allows for convenient backup services, reduces network traffic and provides a host of other services that come with the network operating system.

What are Peer-to-Peer Networks?

In a peer-to-peer network, such as a typical home network, every computer acts as both a client and a server, simply sharing resources between computers. Any computer can share resources with another, and any computer can use the resources of another, given proper access rights.

This is a good solution when there are 10 or fewer users in close proximity to each other, but it is difficult to maintain security as the network grows. This model can be a security nightmare, because permissions for shared resources must be set and maintained at each workstation, and there is no centralized management. This model is only recommended in situations where security is not an issue.

Other Network Models

Before microcomputers became cost effective, dumb terminals were used to access very large mainframe computers in remote locations. The local terminal was dumb in the sense that it was nothing more than a way for a keyboard and monitor to access another computer remotely, with all the processing occurring on the remote computer. This model, sometimes referred to as a centralized model, is not very common.


The all encompassing footnote

A LAN could use something other than a TCP/IP addressing scheme, but the illustration of a LAN and WAN based network as I describe is a typical implementation.

These definitions were written off of the top of my head based on many years of networking experience. Any resemblance to Wiki or any other website is merely coincidental. (Since I am defining basic terms I would hope that they are at least similar!)

Our goal is geek speak made simple. I realize that I may have oversimplified some terms, but the goal here at Computerguru.net is to deliver a basic understanding of the concepts in simple terms, not to deliver a lecture on computer networking fundamentals defining each term. I see many answers on various forums that overcomplicate matters as well as add quite a bit of stray information.


Basic network concepts and the OSI model explained in simple terms

ComputerGuru -

In this chapter of the journey to learn computer networking technology we explain the OSI Reference Model in simple terms, and expand on the different layers of the OSI model.

The OSI model defines the basic building blocks of computer networking, and is an essential part of a complete understanding of modern TCP/IP networks. The theoretical OSI Reference Model is the creation of the European based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry.

Why is the OSI Reference Model important?

An understanding of the concepts of the OSI Reference Model is absolutely necessary for someone learning the role of the Network Administrator or the System Administrator. The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts.

The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight line comparison of the four layers of the TCP/IP model to seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one to one correlation of each aspect.

Simply put, the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference Model is based on the ACTUAL standards of the internet, which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, as discussed in the article What is the difference between the Internet and OSI reference model.

To learn more about the evolution of the TCP/IP model check out the Geek History article: The 1980s internet protocols become universal language of computers

If you are looking for something less technical that focuses more on using a computer network, rather than understanding the core concepts of how it works, please visit our companion website The Guru 42 Universe, where we discuss managing technology from the perspective of a business owner or department manager.

Check out the section Business success beyond great ideas and good intentions and specifically the article The System Administrator and successful technology integration.


The role of the Network Administrator or the System Administrator

On a small to mid size network there may be little, if any, distinction between a Systems Administrator and a Network Administrator, and the tasks may all be the responsibility of a single position. As the size of the network grows, the distinction between the areas becomes more well defined.

In larger organizations the administrator level technology personnel typically are not the first line of support that works with end users, but rather only work on break and fix issues that could not be resolved at the lower levels.

Network administrators are responsible for making sure computer hardware and the network infrastructure itself is maintained properly. The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator in case of outages via email, pager or other alarms.
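A network monitor at its simplest just checks whether a service still answers. The sketch below (host names and ports are placeholders; a real monitor would loop on a schedule and send the email or pager alert described above) tests reachability by attempting a TCP connection:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """A crude availability check: can we open a TCP connection to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A monitor would iterate over the hosts and services it watches and
# notify the network administrator when a check fails.
for host, port in [("localhost", 22), ("localhost", 80)]:
    status = "up" if port_is_open(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```

Production monitoring systems add scheduling, escalation, and history, but the core check is this simple.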

The typical Systems Administrator, or sysadmin, leans towards the software and NOS (Network Operating System) side of things. Systems Administrators install software releases, upgrades, and patches, resolve software related problems, and perform system backups and recovery.

What is the difference between networking and telecommunications?

In a large organization the distinction of telecommunications and networking can vary depending how the organization is structured. I've worked in smaller companies where anything technology related came under the responsibility of the IT (information technology) department. In larger organizations the roles get a bit more defined and separated. For instance, in a large organization someone specializing in telecommunications may have little or no role in understanding computer servers and network operating systems.

I am answering this from my very personal perspective. I began working in the 1970s in telecommunications. In the military that meant I installed and repaired radio communications and telephone equipment. In the commercial world I had an FCC (Federal Communications License) which allowed me to work on radio communications equipment.

In the 1990s I began working in computer networking, which would be IT (information technology). I see the distinction there as information being data driven. My responsibilities are computer servers and network operating systems. The basic premise of a computer network is to share a resource. The device which allows the resource to be shared is a server. For instance, a print server allows a printer to be shared, and a file server allows files to be shared.

In my current position my title includes "telecommunications and networking." My telecommunications responsibilities include telephones. Now with IP (internet protocol) based phones, you have the question of whether it is a phone system problem or a network problem. The separation of responsibilities was a lot "cleaner" before IP based phones. My telecommunications responsibilities also include things like the internal network wiring and dealing with the external issues regarding connectivity to the building. My networking (IT) responsibilities are the maintenance of the computer servers and the network operating systems that allow resources to be shared.


What is the best desktop computer operating system?

ComputerGuru -

There is no one size fits all answer to "what is the best desktop computer operating system?" Let me first tackle the differences between Linux, Microsoft, and Apple. Hopefully the tech purists won't beat me up too much for generalizing here.

The arguments over which operating system (OS) is best often focus on the GUI (graphical user interface). Apple focused on being graphical from the start, and on creating a single power-user desktop computer. They have created their own very successful world.

I work in the world of enterprise computers, that's where many computers are talking together, working together, on local area networks (LANs) and wide area networks (WANs). Some might say I have gone over to the dark side and become a Microsoft fan boy. I bashed Microsoft quite a bit over the years for inefficient operating systems. After spending more than 20 years working with Microsoft products in the enterprise environment I have come to appreciate Microsoft and all the technology they have created.

Linux is a Unix-like computer operating system. When I was teaching I always remembered a line from a song when I described Unix: "It wasn't built for comfort, it was built for speed." Command line functions, the non GUI stuff, are important to the people who use Unix. A lot of Linux, like Unix, is run on servers by people who don't care about the GUI. That's why there are so many distributions of Linux: some are geared to people using it mainly for server based applications, and some Linux distros focus on a pretty GUI. Distro is a shortened version of the term distribution. We will discuss popular Linux distros in our next article.

The Linux kernel

Let me use the analogy of building an automobile and say that the operating system kernel is like the engine and drive train of the vehicle. Some people argue the case for Linux based on the assumption that the Linux kernel offers the best engine and drive train to power our computer. That depends, the best for what purpose?
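To make the engine analogy concrete, Python's standard `platform` module can show which "engine" (kernel) a machine is running, independent of whichever distro "body" is wrapped around it. This is a minimal sketch, not a full system inventory tool:

```python
import platform

# The kernel is the engine: every Linux distro, whatever its look and feel,
# reports a kernel release string (e.g. "5.15.0-88-generic") here.
print(platform.system())   # the OS family, e.g. "Linux" or "Windows"
print(platform.release())  # the kernel/OS release string
```

Two very different-looking distros can print the same kernel release, which is exactly the engine-and-drive-train point: the finish varies, the engine underneath is shared.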

The question often comes up as to why Windows or Apple doesn't create services and applications that work with Linux.

From a programming perspective Microsoft has spent billions of dollars creating services and applications that run on their kernel. What incentive would they have to start creating services and applications specific to a Linux kernel?

Apple seems pretty happy pumping out smartphones, some Apple fans are sad that Apple now appears more focused on phones rather than computers. Apple is the most profitable company on the planet. Why would they start creating services and applications specific to a Linux kernel?

You can't make money on Linux?

There are answers that suggest Apple or Microsoft could not make money supporting Linux. Some people don't understand the concept of open source and believe you can't make money by supporting it.

Richard Stallman, the father of the Free Software movement, explains that software freedom refers to the preservation of the freedoms to use, study, distribute, and modify that software, not to zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the phrase, "free as in free speech, not as in free beer."

As Google has shown with Android you can straddle the fence successfully between supporting an open source operating system while still maintaining a fair amount of proprietary components.

As far as Microsoft supporting Linux, in case you missed it: What do you think of Microsoft joining the Linux Foundation?

"The best GUI"

If we get beyond the argument of why the Linux kernel is the best, the question assumes that we need a Windows or Apple GUI (graphical user interface) to make "the best OS."

There are many impressive looking GUIs in the Linux world. Take a look at all the Linux distributions: some distros have focused on server geeks and server functions, and some have focused on looking good with pretty GUIs for the desktop crowd. For instance, Mint is a fork of Ubuntu, which is itself a fork of Debian. Mint was forked from Ubuntu with the goal of providing a familiar desktop GUI.
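The fork relationships just mentioned can be pictured as a simple parent table. The mapping below is a hand-written illustration covering only the three distros named here, not a complete family tree:

```python
# Hand-written fork ancestry for the distros discussed above.
PARENT = {"Mint": "Ubuntu", "Ubuntu": "Debian", "Debian": None}

def lineage(distro: str) -> list:
    """Walk the fork chain from a distro back to its root project."""
    chain = []
    while distro is not None:
        chain.append(distro)
        distro = PARENT[distro]
    return chain

print(" -> ".join(lineage("Mint")))  # Mint -> Ubuntu -> Debian
```

The real Linux family tree has hundreds of branches, but every distro traces back through chains like this one to a common kernel.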

It's funny how questions on forums often start with "Why is Microsoft Windows so popular?" and then go on to give reasons why it shouldn't be so popular. Microsoft is popular, that is the reality. The reasons of why it shouldn't be so popular are typical perceptions of Linux users looking to stir up a debate.

Desktop computers and personal computers started entering homes and offices in the 1980s. The world of what we then called "IBM compatible" was driven by computers with command line operating systems. That meant you had to type in commands, short words and abbreviations, to get your computer to perform various tasks. People came up with various menus and interfaces, but the desktop was not very graphical.

The mid 1990s was the perfect storm for Microsoft Windows 95. The world was just discovering the internet as online services began connecting to it for the first time. Microsoft marketed Windows 95 as the graphical user interface to the desktop computer, and Internet Explorer as the gateway to the graphical world wide web. Love them or hate them, Microsoft became the dominant desktop operating system that people used in their homes, and connected to the web with, in the 1990s.

It is because Windows became the predominant desktop computer operating system in the 1990s, in offices and schools, that people have little reason to use something different at home. To get people to change, the transition must be totally seamless.

Many Linux fans will say that Linux has become much easier to use, and the interface much more like Microsoft Windows. Many Linux users will call Windows too complicated and say that switching over to Linux is easy. That is a matter of perspective. I have been supporting desktop computers for more than 30 years, and I know firsthand how people hate change. Give any Windows user a different operating system and they will call it complicated, because it is different. When BlackBerrys went out of style and people were forced to use Apple and Android phones, I heard users complain about how they missed how easy their BlackBerry was to use. It was easy because that was what they learned on, and now they were forced to change.

I keep hearing about how all the cool Linux distros are faster, sleeker, and better than Windows, but no computer company has yet successfully mass-produced a mainstream desktop computer running a Linux distro. The closest thing to a home use Linux based computer is the Google Chromebook. I have a Linux computer at home, but it is just a web browser and email reader. Sure, there are a few games on it as well. But there are too many applications I use at work that I could never bring home because they won't run on a Linux computer.

I am by no means a Microsoft fanboy. Over the years I have had strong words for how Microsoft has done things, but in recent years I find myself defending Microsoft because some of the negativity gets pretty silly at times. I am not going to force myself and my family to use a Linux computer just to prove a point. I don't see myself going down that road anytime soon.

My perspective is also a bit different from the average home user's: I am a systems admin. I need to worry about how well multiple computers play together with multiple users. A computer can be used as a toy or as a set of tools; what works best for you depends on what applications you need to do the job. There is no one-size-fits-all answer to which computer you should use.


Common technology questions and basic computer concepts

ComputerGuru -

In this section we are covering common questions and basic computer concepts from the perspective of a typical home user. The first question is obviously, "What computer should I buy?"

Anyone who answers your question "What computer should I buy?" without first asking a few questions back, does not understand the question.

How much computer do you need?

Too often people set out shopping for a computer without first making a list of what they expect the computer to do for them. This is the most common reason for unfulfilled expectations when it comes to technology.

Technology is ever changing, at a very rapid pace. Depending on your level of technical knowledge, expectations of what technology can do will vary widely. Even those who have been around technology for years will sometimes make the most common of errors: buying individual devices without planning how they fit into the total picture. In business today you hear a lot about the thirty thousand foot view. It's all about looking at the total picture, rather than any one thing.

Never lose sight of the fact that technology is just a tool. The finest tools do not turn a novice craftsman into a master. Your financial adviser will tell you the importance of sound financial planning, so if you view a computer as a tool to automate your life, it makes sense to plan your technology purchases. Planning involves some work, but all you need to get started is a pencil and paper.

Starting on a piece of paper, write down your thoughts on a few basic questions. What is in it for you? What benefits do you expect from the system? If you could have anything, what would it be? What would you like to have available to you?


What brand to buy? And where do you buy it?

If you think of a computer as a tool, to organize your life, or increase your productivity, then where you buy your computer should be more of an ongoing relationship, rather than a one time occurrence.

The best analogy I ever heard on defining value: if you knew you had to jump out of a plane, where would you buy a parachute? Someone who'd been in the business for a while might be able to help. I know I'd try to find a place that specialized in parachutes. I know I wouldn't trust buying it from the Cheapo-mart.

Speaking strictly from the viewpoint of Windows computers, I stick with the major name brands like HP and Lenovo. If you sign up for their mailing list on the HP and Lenovo websites they will bombard you with sales, but often have very good deals.

I stay away from the no-name brands, and the low end stuff. I have years of experience on how the cheap stuff doesn't hold up.

Where do you go from here?

In this section we are covering concepts from the perspective of a typical home user. In addressing the question of what computer you should buy, the topic that enters the mix is which operating system is best, so next up we will tackle the question of "what is the best desktop computer operating system?" If you feel ready to try out the Linux operating system, we will also discuss the various flavors of Linux, and why there are so many different distributions.

On computer basics we will go over the definitions of computer system hardware and look at the cables and connections you will need for your home network. If you want to learn more, the sections that follow will go into desktop computer troubleshooting and computer networking concepts.

The section on basic network concepts and the OSI model explained in simple terms is a bit beyond what the average home computer user needs to know. This section will be helpful for someone learning computer networking and looking at basic certifications as a network technician.

Many of the articles written for this website were written many years ago for various classes I taught at local community colleges. I periodically go through the site changing things based on common questions I see being asked on online forums. While this site gets revised from time to time, we purposely try not to put anything in here that would age quickly, such as current events topics. Many of the basic technology concepts do not age over time as much as you would guess.

If you are not sure what is the best technology choice for you, and you need some ideas, or if you want to keep up to date on hot topics in technology, check out the Guru 42 small business and technology blog where we share our views and comments on the technology news of the day.


Learning basic computer and networking technology

ComputerGuru -

Welcome to the Guru42 Universe. Your journey to learning basic computer and networking technology concepts starts here at ComputerGuru.Net

Learning computer networking can be intimidating, like learning a foreign language: so many similar sounding words and phrases, and acronyms everywhere.

Many people approach a computer concept the way they would use a dictionary to find the meaning of a word they don't understand. But it would be difficult to learn a foreign language simply by using a dictionary as your only tool.

Likewise, it is difficult to learn technology concepts simply by looking up specific definitions. We organize the material in sections that can be read like a chapter in a book by topics, rather than simply a list of definitions like a dictionary.

Since 1998, ComputerGuru.net has attempted to provide self help and tutorials for learning basic computer and networking technology concepts, maintaining the theme, "Geek Speak Made Simple."

We continue to receive positive feedback from all over the world about our technology websites as we attempt to present material more from a personal "lessons learned" perspective than a textbook perspective.

In our latest update and expansion we have added sections on desktop computer troubleshooting and Windows Server based on many questions and notes collected over the years. We hope they help you better use and understand technology in your world.

Who is The Guru?

Tom's career in business and technology started with communications and moved to office automation systems long before the acronym "IT" was widely used. As a field service technician and manager for various office automation companies, Tom attended numerous customer service training programs and fine-tuned his skills in customer service.

As small business networks evolved, Tom's career expanded as well into the areas of networking and systems administration. Working as a consultant to numerous businesses delivering various technology solutions, Tom gained valuable project management experience.

Tom began actively speaking and writing on both business and technology issues before the internet was widely used by small business. Exploring PC telecommunications and its role in business led to Tom's first article for a regional business journal, on how the average business could use computerized bulletin board systems (BBS) as a tool for customer service.

Starting as a trainer in the Army National Guard, then as a community college instructor, and now as a webmaster and freelance writer, Tom Peracchio has developed a knack for putting complex topics into simple terms, or as he likes to call it, geek speak made simple.

As a community college technology trainer, Tom learned that not everyone taking webmaster classes was there to be a technician or engineer. Many people took the classes to appreciate the topics covered so they could communicate more effectively with the technology folks they had to deal with in their roles as business managers.

Through writing and the Guru 42 Universe websites Tom Peracchio shares his technology experiences and insights with a wide variety of technology users to help them use technology smarter to make their life easier. 

The ComputerGuru is Tom Peracchio: IT support specialist, web developer, writer, and technology trainer


Sorting through the buzzwords and standards

In the OSI architecture "the physical layer" is used to describe the fundamental layer of computer networking. In more general terms the physical layer is the carrier of information between computers using a variety of wired and wireless technologies.
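For quick reference, the OSI model's seven layers, with the physical layer at the bottom, can be listed in a couple of lines. The lookup helper here is just for illustration:

```python
# OSI layers, ordered from layer 1 (bottom, the physical layer) to layer 7 (top).
OSI_LAYERS = ("Physical", "Data Link", "Network", "Transport",
              "Session", "Presentation", "Application")

def layer_number(name: str) -> int:
    """Return the 1-based OSI layer number for a given layer name."""
    return OSI_LAYERS.index(name) + 1

print(layer_number("Physical"))     # 1
print(layer_number("Application"))  # 7
```

Keeping the ordering straight matters, because so much networking jargon ("layer 2 switch," "layer 3 routing") assumes you know which number goes with which layer.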

In addition to describing the physical layer in the section on the theoretical OSI Reference Model, we sort through the terms, breaking down the definitions and standards into smaller topics as they relate to some commonly asked questions. The pages on networking hardware are included in the section on common questions and basic computer concepts.

We approach our goal of geek speak made simple from the perspective of a network engineer relating things to specific technology standards, avoiding technology street slang or common buzzwords that are often incorrectly used.

Check out these related articles in your quest to understand technology:

The Physical Layer of the OSI model
 


Singularity futurist predicts when humans and machines merge

Guru 42 Blog -

As we study Geek History we explore the visionaries who have an idea and see what is possible, often before the technology exists to make it real. Ray Kurzweil has been a technology visionary since the 1970s when he invented a reading machine for the blind with a text-to-speech synthesizer. In the 1980s Kurzweil created the first electronic musical instrument which produced sound derived from sampled sounds burned onto integrated circuits.

Inventor and futurist Ray Kurzweil believes the day that artificial intelligence becomes infinitely more powerful than all human intelligence combined is not that far off in the future. In his 2005 book, "The Singularity Is Near: When Humans Transcend Biology," Kurzweil predicts when this new phase of artificial super intelligence will take place: "I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045."

Is singularity a destination?

So how far is it from here to infinity? How long will it take us to get to eternity?

I often say that the more I learn, the more I realize how little I know. The phrase "You don't know what you don't know" has been said many ways. It is a play on a well-known saying that is derived from Plato's account of the Greek philosopher Socrates, "I know one thing; that I know nothing."

Maybe I am looking at this from my simple minded human perspective, but three decades is a pretty short time period in the evolution of humans and technology. I have the experience of having worked in the field of technology for more than four decades.

I sound like a real old fart when I talk about using radios with tubes in the 1970s and working with various forms of technology as they transitioned to solid state electronics. I remember back in the 1980s trying to explain to people how they would use personal computers as business tools by plugging them into phone lines. The concept of the internet was not widely known back then.

No one can predict the future with any certainty. Of course, if you want to debate, there were always those visionaries ahead of their time. Leonardo da Vinci is perhaps the greatest visionary to have ever lived. Leonardo saw the possibilities of flying machines in the 1500s, and designed in theory many examples of them, centuries before the Wright Brothers launched their plane at Kitty Hawk. Relatively few of his designs were constructed or even feasible during his lifetime, but the scope and depth of his interests were without precedent in recorded history.

There were many people who could look into the future and see what was possible, such as a true visionary Jules Verne, who was quoted in 1865 as saying, "In spite of the opinions of certain narrow-minded people who would shut up the human race upon this globe, we shall one day travel to the moon, the planets, and the stars with the same facility, rapidity and certainty as we now make the ocean voyage from Liverpool to New York."

One of my favorite science fiction authors I read growing up was Isaac Asimov who told amazing stories of robotics and artificial intelligence. The technology of the 1940s and 1950s could not create the robots in the stories of Asimov. Today the stories of intelligent robots are no longer fiction.

Maybe I've read too many science fiction novels about the utopias and the dystopias? When I say, "You don't know what you don't know," I look at the examples given here. With every generation we are amazed with how far we have come as we look back to the past. But we also see the long journey ahead and are equally amazed as we look towards the future.



When the internet is down my radio still works

Guru 42 Blog -

From time to time events in the world remind us that modern technology has limits, as we recently saw with the Amazon Web Services outage that took down many major websites. People were having panic attacks because they could not get to their favorite websites.

Theoretically the internet was created to be a better, more fault tolerant communications system. As the internet has exploded commercially it has become exactly the opposite of the original goal: the biggest single point of failure in our world. People forget there are other ways of doing things without using the internet, like using traditional broadcast radio for news and entertainment.

It scares me that some people think that we should use the internet for everything. Instead of making any more comments based on my subjective opinion, I felt inspired to do a little research.

It would appear that traditional radio is still alive and well.

Here are some snippets from Pew Research on radio broadcasting:

"... terrestrial radio continues to reach the overwhelming majority of the public."

As far as using radio for a source of news and information:

"Pew Research Center’s own survey work adds insight here, finding radio to be a common source of news among adults in the U.S. In research asking about how people are learning about the U.S. presidential election, 44% of adults said they learned about it from radio in the past week. "

Source: Pew Research Center Audio: Fact Sheet

Those who say terrestrial radio (traditional broadcast radio) is dead might be surprised to see that the Pew research numbers show the percentage of Americans ages 12 or older who listen to terrestrial radio weekly remained pretty steady at over 90% for the years 2009 through 2015.

Source: Audio: Weekly radio listenership (terrestrial)

Why not always use the internet?

You use the simplest tool you need to solve a problem, why make things more complicated than they need to be?

I want to kick back after dinner and unwind watching some mindless entertainment. I watch television. The internet can be a pain at times. Connections are slow, websites take too long to load. Sometimes the alternatives to using the internet are more efficient.

I want to sit on the porch, enjoy a beverage, and relax. I listen to the radio. It is quick and simple. Why would I use anything else?

I am driving in the car, I want some background music to pass the time. I listen to the radio. Why do I need the internet?

What if the power goes out? What happens then? Will my wi-fi work? Or I could just listen to my battery powered radio to connect to the world.

Need any more examples?

Why it makes sense to receive FM Radio on your cell phone

Does it make sense to eliminate FM radio in favor of digital?
 

