Geek News

Who discovered electricity?

GeekHistory II -

Asking who discovered electricity is equivalent to asking who first discovered fire. Electricity existed before humans walked the earth. You could probably make the case that the first human to discover fire also discovered electricity, watching a bolt of lightning strike the earth and start a fire. The bolts of static electricity we see in the sky as lightning during a thunderstorm show the power of electricity.

Ancient writings show that various cultures around the Mediterranean knew that rods of amber could be rubbed with cat fur or silk to attract light objects like feathers. Amber is a gemstone of fossilized tree resin used in making a variety of decorative objects and jewelry, and it has also been used as a healing agent in folk medicine. The first particle known to carry electric charge, the electron, is named for the Greek word for amber, ēlektron.

If you are looking for the name of someone "who discovered electricity" you could possibly look to the Greek philosopher Thales of Miletus (624 B.C. to 546 B.C.). Thales was known for his innovative use of geometry, but his writings are some of the first to document the principles of magnetism and static electricity. Thales documented magnetism through his observations that lodestone attracts iron, and static electricity through his observations of the effect produced by rubbing fur on substances such as amber.

Some stories claim that various artifacts found in the Middle East show that some form of electricity production was possible there thousands of years ago. For telling the story here at Geek History, and busting the myth that Benjamin Franklin discovered electricity, we will start in more modern times by offering the name of William Gilbert as the first person to define electricity, around 1600. Each person on the list that follows contributed to our modern understanding of electricity.

William Gilbert (1544-1603) is regarded as the father of electrical engineering and one of the first scientists to document the concept of electricity, in his book De Magnete published in 1600. Gilbert made a careful study of electricity and magnetism and defined the distinction between the two in his series of books. Gilbert coined the term electricity from the Greek word for amber, ēlektron.

Robert Boyle (1627-1691) is regarded as the first modern chemist and one of the pioneers of the modern experimental scientific method. Boyle is also credited with experiments in the fields of electricity and magnetism. In 1675, Boyle published "Experiments and Notes about the Mechanical Origine or Production of Electricity."

Benjamin Franklin (1706-1790) is often credited in various books and websites as having discovered electricity in the 1750s. The legendary story of Franklin flying a kite in a thunderstorm allegedly took place in 1752. Although Franklin was quite a scientist and inventor, whose inventions included the lightning rod, scientists such as William Gilbert and Robert Boyle began documenting the concept of electricity long before Franklin's experiments.

Alessandro Volta (1745-1827) was an Italian physicist who is regarded as one of the greatest scientists of his time. Before we move on to the next section, where we look at AC power distribution, we give thanks to Alessandro Volta, the scientist who discovered that particular chemical reactions could produce electricity. Volta invented the first battery, known as the Voltaic Pile, in 1799. The unit of electromotive force, the volt, was named to honor Volta.

Michael Faraday (1791-1867), British physicist and chemist, demonstrated the first simple electric motor in 1821 in London. The original "science guy," Faraday founded the Friday Evening Discourses in 1826 and, in the same year, the Christmas Lectures for young people at the Royal Institution. In 1832 Faraday demonstrated that three types of electricity thought to be different (electricity induced from a magnet, electricity produced by a battery, and static electricity) were in fact all the same. Faraday introduced several words into the electricity vocabulary, such as ion, electrode, cathode, and anode.

James Clerk Maxwell (1831-1879) introduced his mathematical conceptualization of electromagnetic phenomena to the Cambridge Philosophical Society in 1855. The Scottish physicist's best-known discoveries concern the relationship between electricity and magnetism and are summarized in what has become known as Maxwell’s Equations. Maxwell's pioneering work during the second half of the 19th century unified the theories of electricity, magnetism, and light.

Graphic: Long before television, nineteenth century scientist and electricity pioneer Michael Faraday took science to the people, as illustrated here delivering the British Royal Institution's Christmas Lecture for Juveniles during the Institution's Christmas break in 1856.

Learn More:

George Westinghouse used Tesla power to defeat Edison in Currents War


README 1ST GeekHistory II the sequel

GeekHistory II -

The idea for the website GeekHistory started when I was teaching Internet and web building courses in 1996. I would start each course with a brief history lesson showing the evolution of the internet, which started in the 1960s. Some students commented that it was a boring waste of time; others praised it as an interesting and informative introduction to the course. It seems that history is a topic that people either love or hate.

Because of the many positive comments by students on the brief internet history lesson, I registered the domain GeekHistory.com back in 2001 with the hopes of developing a history of technology website. I still have a lot of notes collected over the years, with website URLs as references for my material. Some of my resources are notes from websites that no longer exist, and very few of the sites still exist in the form they did back then. I found a lot of good reference material on the AltaVista website. Thankfully I printed a lot of that content and have paper copies of the material in a binder.

GeekHistory was just a shell of a website for many years, just an idea bouncing around in my brain. After more than a decade of owning the domain name GeekHistory.com, I finally started devoting time to building the website on the history of technology. In recent years I have immersed myself in research on various topics, looking for the original sources, in order to tell the story of the history of technology through various generations of ideas and timelines.

We are developing the website GeekHistory like a book, with chapters focused on various generations of inventors and inventions. As we continue to sort through all the information we have gathered over the years, we decided to create the companion website GeekHistory II, more in the format of an almanac with various lists, fast facts, and quick answers to simple questions.

The goal of GeekHistory

My lifelong love of history and technology comes together at GeekHistory. I began working with radios and telecommunications in the Army National Guard in the 1970s, and my first certification was an FCC general class radiotelephone license. A lifelong evolution from field service technician for various office automation companies through my current career in systems administration and telecommunications has inspired me as a writer and web developer of technology topics.

Even though my personal collection of material for the study of geek history dates back to my early days in technology in the 1970s, I am always finding new questions and new myths and legends to address. Through question and answer sites, Twitter wars, and various other social media outlets, I keep running across myths and misinformation represented as facts, sending me off on a quest to find the truth. Anytime a claim is made or a fact is stated by a website or blog that does not appear to have firsthand knowledge of the subject, I make a note to follow up on it. I am continuously finding articles by allegedly credible newspapers, magazines, and respected organizations that are based on popular myths, which sets me off in search of original sources of information.

I am not a university professor with a team of editors and advisers working with me to develop a website. I am one man who loves technology and history and is amazed by how little people know about the great minds in the world of technology. Geek History is not meant to be an authoritative source for technology history. We are just trying to get you to think about the many amazing people who have contributed to the world of technology. Our goal is to increase awareness, educate, and entertain.

One of my inspirations for the Guru42 Universe is the Oliver Wendell Holmes quote, "Man's mind once stretched never goes back to its original dimension." The more I learn about geek history, the more questions I have, and the more I want to know.

The who invented myth and eureka moment that never happened

GeekHistory II -

Every question that begins with "who invented" should get this as an auto response: "It is usually a fallacy to credit a single individual with the invention of a complicated device. Complicated devices draw on the works of multiple people."

We spend a lot of time deciding where to give people credit for various inventions that were nothing more than the next step in the evolution of the world of technology.

Inventions during the Industrial Revolution involved a series of new devices and creations in which man power, and literally horse power, was being replaced by machines: from steam engines that turned manual labor into mechanical work, to the automobile, which replaced the horsepower of a live horse with the horsepower of an internal combustion engine. The inventions of the industrial age were an evolution of doing existing things in very new ways. The 18th century idea of an invention was genuinely more individual and less systemic.

It was a different world in the industrial age of the late 1800s and early 1900s. The greatest minds and the greatest laboratories were not inventing things at universities, but were working in what resembled an industrial machine shop. Thomas Edison institutionalized the concept of the individual inventor; his invention factory took the concept of one man in a lab tinkering with a problem and changed it into project management, where one man hired a team to do more than he could as an individual. People say that Edison stole ideas because he had other people do the experiments while he took the credit. No, that was the real genius: he created the invention factory. There were many menial tasks that needed to be done, and he automated the process.

When the internet and personal computers were being developed in the 1960s and 1970s, most of the geeks were doing their work at universities, much of it sponsored by government agencies like DARPA (Defense Advanced Research Projects Agency).

What does it take to become a great inventor?

Being an inventor is not a field of study, it is a state of mind. Great inventors, innovators, and industrialists all had one thing in common: a passion for their ideas, and a passion to turn their visions into reality. There are endless stories of "inventors" who were always tinkering with things. They had a burning desire to understand how things worked.

Using a tree branch to help us pry something apart, we have invented a lever. Using a tree trunk that rolls to help us move something heavy, rather than dragging it across a flat surface, we have the beginnings of a wheel. As these very simple solutions to very simple problems became refined, they became inventions.

The nature of man is solving problems, and the solutions to these problems are inventions. And the successful inventor will tell you, it is more than just having an idea, it is turning that idea into something people can use.

Inventor or innovator?

Often there is a bit of a smug attitude that favors giving someone credit for an invention versus just being an innovator. A good example is a remark I've seen regarding Henry Ford: "he didn't invent anything."

Even if Henry Ford invented nothing, he changed everything. Ford did not invent the automobile, and Ford did not invent the assembly line. What Ford did was improve upon the assembly line with a passion that drove down the price of an automobile significantly. He turned the automobile from a rich man's toy into something the average American could afford. Ford improved upon the design of the automobile and the assembly line and revolutionized an industry.

The concept of the automobile, and specifically the electric automobile, is an idea that has been around for more than 100 years. Henry Ford thought about the electric automobile, as did other inventors, over a hundred years ago. But what is one of the hottest topics in modern technology? The electric car. There is a fascination in recent years with the work of Tesla Motors, and recently Faraday Future made news with the showing of a new electric automobile prototype.

Isn't technology an ongoing evolution of ideas and innovations? Do you see the work of modern electric car companies like Tesla Motors and Faraday Future as inventing new things or combining existing things? The more important question I would ask, is why does that distinction even matter?

In search of the glorified eureka moment

There are many special individuals who have those eureka moments, where one idea changes everything. There are visionaries who have an idea and see what is possible before the technology exists to make it real. There are inventors who take visions and make them real. There are innovators who take a good invention and make it great. And there are industrialists who take an invention and develop it into an industry.

Study people to learn from their success, and their failures. Try to understand when a burning desire can turn into a dangerous obsession.

Question everything. Find something that really interests you, and learn everything you can about the topic. How does it work? How could it be made better?

Geeks introduce us to brave new worlds, with visions of the future. Geeks pick up where others left off, to turn a vision into a reality.


Wondering about the dark web and the forbidden fruit of the internet

Guru 42 Blog -

The phrase forbidden fruit typically refers to engaging in an act of pleasure that is considered illegal or immoral. That fits the mold of many questions I am often asked, such as what illegal or immoral websites you can find on the mysterious and mythical part of the internet known as the dark web. The mysterious dark web, sometimes called the dark net, is the fuel for spy movies: it helped to create WikiLeaks, run by the super spy Julian Assange, and it allows cyber snitches like Edward Snowden to share secret information. People are anxious to know how to find what is hiding beneath the surface in the dark web.

According to remarks made by Roger Dingledine at a recent Philly tech conference, the overall perception of the dark web is more mythical than factual. Roger Dingledine is an MIT-trained American computer scientist known for having co-founded the Tor Project, aka "the dark web." Dingledine spoke at Philly Tech Week 2017, putting some of the myths and legends of "the dark web" into perspective.

The worldwide network known as "the dark web" uses specially configured servers designed to work with custom configured web browsers for the purpose of hiding your identity. You will see the terms Tor servers and Tor web browsers used to describe this private network. Tor originally stood for "The Onion Router." The Tor Project, Inc is a Massachusetts-based research-education nonprofit organization founded by computer scientists Roger Dingledine, Nick Mathewson, and five others. The Tor Project is primarily responsible for maintaining software for the Tor anonymity network.

If you are looking for all that forbidden fruit hiding beneath the surface: according to Dingledine, no more than one to three percent of the Tor network's traffic comes from "hidden services" or "onion services," services that use the public internet but require special software to access. Dingledine claimed that onion services basically do not exist, adding that it's nonsense that there are "99 other internets" users can't access.

One popular way to describe the deep web and dark net is with a graphic of an iceberg. Dingledine advised his audience not to pay attention when someone uses the iceberg metaphor, and criticized the news providers who use it to describe the darknet and the deep web. According to Dingledine, just about any use of the phrase "dark web" is really just a marketing ploy by cybersecurity firms and other opportunists. So the forbidden fruit you were hoping to find really is just a myth after all.

Learn more:

People are fascinated by what you can find on the dark web, but have no idea what it all means. Learn more from Guru42 in this article, where I go over the basic definitions with links to learn more: Buzzwords from the world wide web to deep web and dark net

Referencing Roger Dingledine at Philly Tech Week 2017 here are some links about that event:

Stop Paying Attention When Someone Uses The Iceberg Metaphor For The Dark Web

Stop talking about the dark web: Tor Project cofounder Roger Dingledine


What you need to know before buying a computer

Guru 42 Blog -

At last the secret of what you need to know before buying a computer is revealed: there is no one-size-fits-all answer. But you don't need to be a world class geek to learn computer buzzwords and understand some basic concepts before you shop for your next computer.

I usually try to stay out of the Apple versus Microsoft debates. Since I am updating some content on desktop operating systems on ComputerGuru.net, I thought I would use this blog post to address the often asked question of "what computer should I buy" and add this perspective. I will also introduce a few new articles that answer some frequently asked questions relevant to someone shopping for a computer.

Recently on an online forum the question of "what computer should I buy" was asked, based on the idea that a MacBook Pro is inherently the best laptop out there. The person asking the question was looking for reasons to buy a MacBook Pro, but gave no clues about how they were going to use it. That is a very important factor in answering the question! I never answer "what computer should I buy" questions for friends and family until I ask several questions of my own.

I laughed as I read one of the answers, which stated, "If all you are going to do is web surfing, social media, and email you don't need a MacBook Pro." Yeah, that's right. There are Chromebooks as well as cheap Windows notebooks that could do that for a lot less money!

My best advice to anyone looking to buy a computer: think long and hard about how you are going to use it, then find other people with the same wants and needs and ask them what they own, and what they like and don't like about it.

I am not a graphics designer or an artist; those are the types of users who are typically the Apple fans. I have been working in enterprise computer networking for more than 20 years, and started working on desktop computers in the 1980s. I look at the computer as a tool, and I look for the best tool for the task at hand. I have no loyalties to any specific brands.

Many answers comparing Microsoft to Apple use some variation of a luxury car versus cheap import comparison, implying that if you could afford the expensive luxury car but choose otherwise, you must be a fool. So let me run with that analogy.

Take a step back and look at the history of Apple versus Microsoft. In the 1990s, when Windows 95 dominated the desktop, Microsoft was the Ford F-150 pickup truck. Not many people would describe the Ford F-150 as a sexy luxury vehicle, but many would describe it as the workhorse vehicle that gets the job done. There's a good case to be made that the folks marketing to pickup truck buyers have a different plan than those looking to sell the sexy luxury vehicle.

A computer is a tool I use for work as well as recreation. I work in a business world that is Microsoft based. We are required to purchase a specific brand of Windows based computers, not my favorite brand, but that's my environment. My problem is not so much with Windows as with the vendors that support our users, who create applications that run on old Microsoft operating systems. I have to deal with home-cooked applications that are designed for last generation Windows computers. That's my world.

I have had iPads and various other Apple products in my home, and they never got used. Even if the interface is slightly different, I don't have time to deal with it. I have had access to Kindles and Nooks, and they never got used. I can put an application on my Windows notebook that reads the books, so why do I need to learn a new interface? It's called being lazy, I know it is, but I have no personal reason to care about Apple products. It's nothing personal.

If one of my family members wants to buy a luxury car, I will be happy to ride in it. If money were no object, tomorrow I would go out and buy the new Ford F-150 pickup truck that best suited my needs.

I don't get emotionally attached to my computers or automobiles. They are tools. Nothing more.

You too can understand computer buzzwords

Since 1998, ComputerGuru.net has provided self-help and tutorials for learning basic computer and networking technology concepts, maintaining the theme "Geek Speak Made Simple." Recently I updated the Drupal content management software for ComputerGuru and updated a few pages.

Based on commonly asked questions, I have added several new pages to the section Common technology questions and basic computer concepts. On computer operating systems we have added an article that explains the major differences between desktop computer operating systems and one on installing Linux and understanding all the different Linux distributions.

I get a lot of questions on computer cables and finally finished up this article on Ethernet computer network cable frequently asked questions answered, and an article explaining computer network modular connectors and telephone registered jacks.

And based on many questions on printers, we had some fun coming up with this article, the ugly truth about computer printers.

Yes, I know that sounds like a lot of geek speak, but we do our best to break it all down into small bite-sized chunks, so it is easy to digest. Please take a few minutes to check out the new content, and please share it with your geek friends on social media.

Are there any topics we need to cover? Any questions we missed?

Are there any buzzwords bothering you?  Something else you would like us to cover here at the Guru 42 Universe?  Let us know: Guru 42 on Twitter -|- Guru 42 on Facebook -|- Guru 42 on Google+ -|- Tom Peracchio on Google  


Wireless Networks in Simple Terms WLAN and Wi-Fi defined

ComputerGuru -

The term Wi-Fi is often used as a synonym for wireless local area network (WLAN). Specifically, the term "Wi-Fi" is a trademark of a trade association known as the Wi-Fi Alliance. From a technical perspective, WLAN technology is defined by the Institute of Electrical and Electronics Engineers (IEEE).

In computer networking everything starts with the physical layer, which for many years was a copper wire. The physical layer was expanded to include anything that represents the wire, such as fiber optic cable, infrared, or radio spectrum technology.

A wireless network is any type of computer network that is not connected by cables of any kind. While cell phone technology is often discussed as a form of wireless networking, it is not the same as the wireless local area network (WLAN) technology discussed here.

What is Wi-Fi?

The term Wi-Fi has often been used as a technical term to describe wireless networking. Wi-Fi is actually a trademark of the Wi-Fi Alliance, a global non-profit trade association formed in 1999 to promote WLAN technology. Manufacturers may use the Wi-Fi trademark to brand products if they are certified by the Wi-Fi Alliance to conform to certain standards.

A common misconception is that Wi-Fi is an acronym for "wireless fidelity"; it is not. The Wireless Ethernet Compatibility Alliance wanted a cooler name for the new technology, as "IEEE 802.11b" was not all that catchy. The marketing company Interbrand, known for creating brand names, was hired to create a brand name to market the new technology, and the name Wi-Fi was chosen. The term "Wi-Fi," with the dash, is a trademark of the Wi-Fi Alliance.

IEEE 802.11 defines WLAN technology

The actual technical standards for wireless local area network (WLAN) computer communication are known as IEEE 802.11. IEEE refers to the Institute of Electrical and Electronics Engineers, a non-profit professional association formed in 1963 by the merger of the Institute of Radio Engineers and the American Institute of Electrical Engineers.

IEEE 802 refers to a family of IEEE standards dealing with networks carrying variable-size packets, which makes it different from cell phone based networks. 802.11 is the subset of the family specific to WLAN technology. Victor "Vic" Hayes was the first chair of the IEEE 802.11 group, which finalized the wireless standard in 1997.

This link takes you to the 802.11 specification, which contains all the geek speak on how it works: IEEE-SA - IEEE Get 802 Program
https://standards.ieee.org/about/get/802/802.11.html

How fast is Wi-Fi?

Wi-Fi speed is rated according to maximum theoretical network bandwidth defined in the IEEE 802.11 standards.

For example:

IEEE 802.11b - up to 11 Mbps

IEEE 802.11a - up to 54 Mbps

IEEE 802.11n - up to 300 Mbps

IEEE 802.11ac - up to 1 Gbps

IEEE 802.11ad - up to 7 Gbps

If you look at the IEEE 802.11 Wireless LANs standards you will see the ongoing evolution with several standards under development at this time to increase speeds even more.

Keep in mind that Wi-Fi speed is how fast your internal network is, as in your wireless LAN (local area network).

Fast Wi-Fi does not mean a fast internet connection; it has nothing to do with the speed or bandwidth of your internet access.
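As a rough back-of-the-envelope illustration (a sketch only; the file size is an arbitrary example, and real-world throughput is typically well below the theoretical maximum), here is how those maximum ratings translate into transfer time:

```python
# Theoretical best-case time to transfer a 1 GB file at each
# standard's maximum rated bandwidth. Ratings are in megabits per
# second (Mbps); a byte is 8 bits, so we convert accordingly.

FILE_SIZE_MB = 1000  # 1 GB expressed in megabytes

ratings_mbps = {
    "802.11b": 11,
    "802.11a": 54,
    "802.11n": 300,
    "802.11ac": 1000,
    "802.11ad": 7000,
}

def transfer_seconds(size_megabytes, rate_mbps):
    """Seconds to move size_megabytes at rate_mbps (theoretical best case)."""
    return (size_megabytes * 8) / rate_mbps

for standard, rate in ratings_mbps.items():
    print(f"{standard}: {transfer_seconds(FILE_SIZE_MB, rate):.1f} seconds")
```

At 11 Mbps that 1 GB file takes over twelve minutes in theory; at 7 Gbps it takes about a second, which is why the rating matters even though you never see the full number in practice.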

How does Wi-Fi work?

A Wi-Fi enabled device such as a personal computer or video game console can connect to the Internet when within range of a device such as a wireless router connected to the Internet. Wireless local area network (WLAN) technology allows your device to connect to the router, which in turn connects you to the internet. In order to connect to the internet, you need a unique IP (internet protocol) address. On your home network, when your router is connected to the internet, it has a public address; that is the one that faces the internet, and it is unique in relation to other routers on the internet.

Your router also has a local IP address, something like 192.168.1.2, which is in a private IP address space. Addresses beginning with 192.168 cannot be transmitted onto the public Internet and are typically used for home local area networks (LANs). If you have four home computers, your router creates a home network and each of the four computers has a unique address in relation to the others. Your local computers connect to the router either by a wire plugged into the router or through a wireless signal.

Routers create logical borders between networks, allowing a gateway, such as an access point to the internet, to be shared. In geek speak terms, subnetting can be very complex, but what is happening here is the process known as subnetting.
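The split between public and private addresses described above can be checked with Python's standard ipaddress module (a minimal sketch; the specific address values are just illustrative examples):

```python
import ipaddress

# Addresses in the 192.168.0.0/16 block (among other ranges reserved
# by RFC 1918) are private: they are used on home LANs and are never
# routed on the public internet.
router_lan = ipaddress.ip_address("192.168.1.2")        # router's LAN side
example_public = ipaddress.ip_address("93.184.216.34")  # a public address

print(router_lan.is_private)      # True: private home-network address
print(example_public.is_private)  # False: publicly routable address

# A home LAN like the four-computer example is one subnet;
# a /24 gives the router 256 addresses to hand out locally.
home_net = ipaddress.ip_network("192.168.1.0/24")
print(home_net.num_addresses)     # 256
```

This is the distinction the router maintains: the one public address faces the internet, while every device behind it lives in the private subnet.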


The ugly truth about computer printers

ComputerGuru -

The printer is the source of pain and problems for every computer user.  The ugly truth about computer printers is that everyone has one and they all stink.

A printer is very mechanical, there are a lot of moving parts.  Every printer from the very simplest, to the most complex, has numerous gears, springs, and rollers that all need to move in perfect harmony in order for your printer to work.  

In understanding why computer printers are a source of frustration, let me explain some of the other components of a typical computer system. On your home desktop computer you have a large box that everything plugs into. I hear people call this box a CPU; some call it a hard drive. Technically the CPU is one small part on the main circuit board that sits inside that box. The main circuit board, as well as the CPU and memory modules that plug into it, are solid state, which means they are all electronic. Unless you get hit with a power surge or some external electrical issue, it is rare for the electronics of a computer to wear out over time. Even hard drives, which once were very mechanical, are now becoming solid state, which means no moving parts and much greater reliability.

Same thing with your display, what we used to call a monitor. Back in the days of CRT monitors, the CRT (Cathode Ray Tube) wore out over time; it degraded because it heated up. In my experience over the years I've seen some monitor failures. Not so much with modern displays; like the computer itself, they are now all electronic and less likely to degrade over time.

Things like keyboards and mice still have a few mechanical parts, but they don't wear out often. When they do wear out, they are simple to replace, and people don't get too excited when they need to be replaced.

But alas, the printer, the pain of every computer user.  You just typed that report and you need it now.  You are leaving for the movies and you want to print the tickets, and the printer won't work.  There is never a convenient time for the printer to break.  

Even the simplest of printers has a handful of gears, springs, and rollers that wear out over time. The paper tray gets banged around every time you fill it up. Every time someone takes out a paper tray, they bend something, they twist something, a part gets knocked off. With the need to lower the cost of printers, many of these mechanical parts are made from very low quality metal and plastic.

And here is one element of printers that many people overlook: the paper. When the air gets dry, when the heat is on in the winter, the paper gets full of static electricity, so it jams more often. Instead of taking the paper out of the tray, fanning it a bit, and flipping it over, you bang the paper tray a few times. Maybe you yank the paper out when it jams, bending and stretching the metal arms and guides on the paper tray.

When the weather is damp and humid, that will also cause the paper to jam. Do you close the wrapper on your paper when it is just lying around? Or is it just thrown on a shelf outside the wrapper? I have seen many print quality issues caused by paper. Having spent a long career in office automation and computer networking, I could write a book on the subject of printer problems caused by paper. The hardest part in answering this was keeping it brief.

Types of printing technology

Another issue you have with printers is consumable supplies like ink and toner. Every freaking printer model has its own unique ink or toner cartridge. When you try to save money by refilling cartridges, it is a crap shoot. More often than not, I have seen refilled cartridges cause problems.

In the early days of desktop computers the dot matrix printer was the standard. They could be pretty noisy as the small needles in the print head fired through the ribbon, creating dots of information on your paper. Ribbons faded over time, and copy quality was not great, but printer ribbons were fairly inexpensive compared to modern ink cartridges. The boxes of paper with the tractor feed holes seem a little primitive compared to the plain paper printers of today, but in many ways the tractor feed paper was a more problem free solution than many of the modern printers with paper trays.

Inkjet printers began replacing dot matrix printers, offering higher quality. A less noisy printer with higher quality could have been a blessing; instead, inkjet technology was more of a curse. A color inkjet printer uses multiple color cartridges, each of which includes a print head as part of the replaceable cartridge, which adds to its expense. The cartridges themselves have very narrow inkjet nozzles that are prone to clogging, and they dry out over time. Newer intelligent ink cartridges that communicate with the printer add another level of complexity, and another potential point of failure.

Laser printers have been around since the very early days of desktop computers. They are high quality printers, but were for many years very high cost. In the early days it was rare to have a laser printer on your home computer, but over the years the quality has increased and the price has dropped dramatically. You can get a low cost black-and-white laser printer for less than a hundred dollars. That is what I have in my home office; I have given up on low cost inkjet printers. Most of the time I use my home office laser printer to print a document such as a receipt, or maybe my tickets for a movie or sporting event, and I don't need color for that.

The price of a laser printer toner cartridge sounds expensive (the last one I replaced was over $50), but they last ten times longer than inkjet cartridges. On a cost per copy basis, a laser printer is significantly cheaper to own than an inkjet. If I really need a high quality color copy, I can take a document on a USB drive to a local shop and get one there.

Prices have been dropping in recent years, and color laser printers cost a fraction of what they once did. If you need a color printer and print more than a few copies a month, do some calculations on the cost per copy of a color laser printer. You might be surprised to see that over the long haul a color laser printer is not as expensive to own as an inkjet.

It's not your fault for buying a crappy printer

Between having a home computer system and working in the field of office automation and business machines since the early 1980s, I have worked with numerous brands of printers and printing equipment. It is hard to recommend a specific brand or specific model of printer at any given time because they are constantly changing. In a marketplace that is always shopping for low cost, a manufacturer will often cut corners to lower costs, and a usually reliable brand will have some really horrible models.

We are discussing the computer printer here as a hardware device, but software issues, such as finding the proper drivers for your current computer operating system and getting Wi-Fi printing to work on your network, can also create problems. Shop wisely, and read consumer reviews of the currently popular printers to see the potential problems for a model you are considering buying.

The primary reason a printer is the most likely part of your computer system to cause you pain comes down to the printer having the most moving parts, but there are also many other issues dealing with supplies such as paper, ink, and toner. Maybe you won't feel any better about all the printing problems you are having after reading this article, but at least you will know it's not your fault for buying a crappy printer: they all stink.


Buzzwords from the world wide web to deep web and dark net

Guru 42 Universe -

There are a lot of definitions that get thrown around about “the deep web” and “the dark web.” It is frustrating how people use the terms without a clue as to what they mean. The deep web and dark web are NOT synonyms!

Starting with defining "The Internet," think of all the wires and connections as a highway system. When I talk about the general term of the internet, I am speaking about the technologies that move packets of information along wires from one destination to another, specifically the family of protocols known as TCP/IP (transmission control protocol - internet protocol).

The "World Wide Web” represents the many destinations that are connected together using the public highway system of the internet. When I talk about the general term of the World Wide Web, I am speaking about the technologies that create websites and webservers such as HTTP (hypertext transfer protocol) and HTML (hypertext markup language).
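
To make the layering concrete, here is a minimal Python sketch, purely illustrative, of the division of labor: a TCP/IP socket (the "highway") carries raw bytes between two endpoints, while HTTP and HTML (the "web") define what those bytes mean. The tiny loopback server here is a toy, not a real web server.

```python
import socket
import threading

def tiny_web_server(server_sock):
    # Accept one connection, read the HTTP request, answer with a tiny HTML page.
    conn, _ = server_sock.accept()
    conn.recv(1024)                              # the HTTP request arrives as bytes over TCP
    body = "<html><body>Hello</body></html>"     # HTML: the document format of the web
    response = ("HTTP/1.1 200 OK\r\n"
                f"Content-Length: {len(body)}\r\n"
                "Content-Type: text/html\r\n"
                "\r\n" + body)
    conn.sendall(response.encode())              # the HTTP response goes back out as bytes
    conn.close()

# The "highway" layer: a TCP/IP socket bound to a local address and port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                    # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=tiny_web_server, args=(server,), daemon=True).start()

# The client side: TCP/IP moves the bytes, HTTP gives them meaning.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
chunks = []
while True:
    data = client.recv(4096)
    if not data:
        break
    chunks.append(data)
client.close()
reply = b"".join(chunks).decode()
print(reply.splitlines()[0])                     # the HTTP status line
```

Swap the loopback address for a public one and you have, in miniature, every web page you have ever loaded: the internet delivering bytes, the web giving them structure.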

Where it gets confusing is how you apply the usage of the terms. Sometimes when people say "the internet" they are not describing just the highway system, but they are using the term to represent all the websites in existence. Likewise, often when people say “The World Wide Web” they use it to mean all the websites in existence.

The technology the internet uses on the public highway, internet protocols like TCP/IP and World Wide Web components like HTTP and HTML, can also be used to take us to private destinations. This collection of private destinations is known as the "Deep Web." Computer scientist Michael Bergman, founder of the search indexing company BrightPlanet, is credited with coining the term deep web in 2001 as part of a research study.

In 2014, a Forbes article, "Insider Trading On The Dark Web" (1), completely confused the terms, misquoting BrightPlanet CEO Michael Bergman and incorrectly describing BrightPlanet as "a firm that harvests data from the Dark Web." In response to the confusion about the terms Deep Web and Dark Web, BrightPlanet published the article "Clearing Up Confusion – Deep Web vs. Dark Web." (2)

The link to the BrightPlanet article is listed at the end of this article, but here are a few points from that article which define the main points.

- "The Surface Web is anything that can be indexed by a typical search engine like Google, Bing or Yahoo."
- "...the Deep Web is anything that a search engine can’t find."
- "The Dark Web then is classified as a small portion of the Deep Web that has been intentionally hidden and is inaccessible through standard web browsers."
- "The key thing to keep in mind is the Dark Web is a small portion of the Deep Web."



Why does the "deep web" have much more content than the "regular web" since it's used by far fewer people?

Here's an analogy that might help you understand why there is so much more information "below the surface" on private networks, than above the surface on public networks.

Go to the downtown of an average city where you can find a variety of commercial office buildings. Some of the buildings have a lobby, where you can go inside and walk around. Some buildings might actually have a common area where the general public can walk around freely and access various bits of information, like the lobby of a bank or insurance company. But on the floors above the lobby are offices which require special privileges to access; you must have a reason to get into those rooms.

Likewise, you might have a government building where the first floor might contain a post office or some other public service agency that anyone can access. But the floors above it could contain other types of offices where admission is restricted, or accessed by invitation only.

In your downtown area, how many of the buildings can you walk around freely, and how many have controlled access? Are there buildings that you cannot walk around in at all because they are privately owned and don't allow access to the general public?

I could expand the analogy further, but hopefully you start to see that in the "real world" of your downtown area there will be places that are open to the public, and other areas with various degrees of access limitations. Likewise, in the virtual world of the web, there will be places that are open to the public, and other areas with various degrees of access limitations.

The deep web does not mean some dark and mysterious place of evil; it is simply a term describing an area of controlled access rather than free and open access.

What is the dark web and how do you access it?

Going back to the analogy that the deep web represents the buildings in your town that don't allow access to the general public, the dark web represents all the back alley doorways that are not clearly marked and are accessed by knowing what to say to the doorman to gain access to what is inside.

The worldwide network known as "the dark web" uses specially configured servers designed to work with custom configured web browsers for the purpose of hiding your identity. You will see the terms Tor servers and Tor web browsers used to describe this private network. Tor originally stood for "The Onion Router."

Tor receives funding from the American government but operates as an independent nonprofit organization. The dark web is an interesting place as described in a Washington Post article that explains how the NSA is working around the clock to undermine Tor's anonymity while other branches of the federal government are helping fund it.(3)

A Wired article explains how WikiLeaks was launched with documents intercepted from Tor.(4) You can follow this link to an interview with former government contractor Edward Snowden (5) explaining how Tor is used to create a private communications channel.

What can you find on the dark net?

The mysterious dark web, sometimes called the dark net, is the fuel for spy movies. It helped to create WikiLeaks, run by the super spy Julian Assange, and it allows cyber snitches like Edward Snowden to share secret information.

Because the dark net is hidden, and the people hiding there are doing their best not to be found, knowing what goes on in the dark can be as mysterious as the name implies. For example, one study claims that nearly half of the sites on the dark net are not doing anything illegal.(6) But a different study claims that 80% of dark net traffic is related to child abuse and porn sites.(7)

Various names have been used to describe the dark net, such as the black internet, to suggest it is the home of online black markets. And the claims of the black internet are supported when a well known online drug black market gets busted. (8)

But does anyone really know what we could find on the dark net? What could you find in your city if you started knocking on doors in dark alleys? Would you want to guess?

Learn more:

Internet and World Wide Web visionaries ponder surviving world war

Who invented the world wide web?

References:

(1) Insider Trading On The Dark Web https://www.forbes.com/sites/realspin/2014/03/25/insider-trading-on-the-dark-web/

(2) Clearing Up Confusion – Deep Web vs. Dark Web. https://brightplanet.com/2014/03/clearing-confusion-deep-web-vs-dark-web/

(3) The NSA is trying to crack Tor. The State Department is helping pay for it.
https://www.washingtonpost.com/news/the-switch/wp/2013/10/05/the-nsa-is-trying-to-crack-tor-the-state-department-is-helping-pay-for-it/

(4) WikiLeaks Was Launched With Documents Intercepted From Tor
https://www.wired.com/2010/06/wikileaks-documents/

(5) This is What a Tor Supporter Looks Like: Edward Snowden
https://blog.torproject.org/blog/what-tor-supporter-looks-edward-snowden

(6) Research suggests the dark web is not as dark as we think
http://www.htxt.co.za/2016/11/02/research-suggests-the-dark-web-is-not-as-dark-as-we-think/

(7) Study claims more than 80% of 'dark net' traffic is to child abuse sites
https://www.theguardian.com/technology/2014/dec/31/dark-web-traffic-child-abuse-sites

(8) "End Of The Silk Road: FBI Says It's Busted The Web's Biggest Anonymous Drug Black Market"
https://www.forbes.com/sites/andygreenberg/2013/10/02/end-of-the-silk-road-fbi-busts-the-webs-biggest-anonymous-drug-black-market/


Everything you need to know about Ethernet and computer cabling

Guru 42 Universe -

The concepts of Ethernet and computer network cabling are full of buzzwords and geek speak. We wanted to break down the jargon into bite sized chunks to help you understand the concepts.

Everything in computer networking starts at the physical layer, that's where the wires plug into the boxes with blinking lights. Because Ethernet deals with wires at the physical layer, at times Ethernet becomes a generic word for any type of wire associated with a computer network.

We created this section on business success beyond the technology buzzwords at the Guru 42 Universe based on conversations we had with business professionals as well as technology professionals. In discussing technology from the perspective of a business owner or business manager, we realize you don't have time to become a network engineer, but we also understand your frustration with all the buzzwords. With those thoughts in mind, we created this introductory page defining the term Ethernet and explaining computer network cabling.

In designing ComputerGuru we break down the topics from the perspective of the person asking the questions. At our ComputerGuru site we have the section Common technology questions and basic computer concepts, which is aimed at the typical home computer user.

Even a non-technical casual user of a personal computer has probably heard the term Ethernet from time to time. Likewise, the typical computer user has probably misplaced the piece of wire used to connect their computer and gone off in search of a network cable. As an introduction to Ethernet and computer network cabling we have created the following pages: Ethernet computer network cable frequently asked questions answered and Computer network modular connectors and telephone registered jacks.

The strict technical definition of Ethernet is a physical and data link layer technology for local area networks (LANs). If you want to dig deeper into the technology, in our section targeted at learning computer networking technology we have Basic network concepts and the OSI model explained in simple terms. In that section, The Physical Layer of the OSI model discusses the more technical terms of data communications. The concept of Ethernet is more than just defining wires and connections, and that is discussed as part of The Data Link Layer of the OSI model.

Any topics that need to be covered? Any questions missing?

Are there any buzzwords bothering you?  Something else you would like us to cover here at the Guru 42 Universe?  Let us know: Guru 42 on Twitter -|- Guru 42 on Facebook -|- Guru 42 on Google+ -|- Tom Peracchio on Google  


Computer network modular connectors and telephone registered jacks

ComputerGuru -

The plastic plugs on the ends of telephone wiring and computer cables are defined by various technical standards. Because these standards are full of technical definitions and acronyms, it is easy to see how street slang becomes the accepted definition for many of the plastic plugs.

It is important to understand that connecting devices together is more than just matching up connector ends on a piece of wire. Just because you can find an adapter to make your cable fit into a connection is no guarantee that the device will communicate on your network. Some connectors that look exactly alike could have different wiring configurations.

In the world of technology, street slang, or common buzzwords, often becomes the accepted description of something rather than the specific technology standard. For example, describing Ethernet patch cables as using RJ45 connectors illustrates one of the most misused terms in the world of technology.

We will do our best to break down some of the buzzwords and jargon to help you understand the differences in the terms.

Modular connectors

A modular connector is an electrical connector that was originally designed for use in telephone wiring, but has since been used for many other purposes. Many applications that originally used a bulkier, more expensive connector have converted to modular connectors. Probably the most well known applications of modular connectors are for telephone jacks and for Ethernet jacks, both of which are nearly always modular connectors.

Modular connectors are designated with two numbers that represent the quantity of positions and contacts; for example, the 8P8C modular plug is a plug having eight positions and eight contacts.
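
The naming convention is regular enough to express in a few lines of code. Here is a small Python sketch (the function name is just illustrative) that splits a designation into its two numbers:

```python
import re

def parse_modular_designation(name):
    """Split a designation like '8P8C' into (positions, contacts)."""
    match = re.fullmatch(r"(\d+)P(\d+)C", name.upper())
    if not match:
        raise ValueError(f"not a modular connector designation: {name}")
    return int(match.group(1)), int(match.group(2))

print(parse_modular_designation("8P8C"))  # Ethernet-style plug: (8, 8)
print(parse_modular_designation("6P2C"))  # single-line telephone plug: (6, 2)
```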

Do not assume that connectors that look the same are wired the same. Contact assignments, or pinouts, vary by application. Telephone network connections are standardized by registered jack numbers, and Ethernet over twisted pair is specified by the TIA/EIA-568 standard.

Telephone industry Registered Jack

A Registered Jack (RJ) is a wiring standard for connecting voice and data equipment to a service provided by a telephone company. In some wiring definitions you will see references to the Local Exchange Carrier (LEC), which is a regulatory term in telecommunications for the local telephone company.

Registration interfaces were created by the Bell System under a 1976 Federal Communications Commission (FCC) order for the standard interconnection between telephone company equipment and customer premises equipment. They were defined in Part 68 of the FCC rules (47 C.F.R. Part 68) governing the direct connection of Terminal Equipment (TE) to the Public Switched Telephone Network (PSTN).

Connectors using the designation Registered Jack (RJ) describe a standardized telecommunication network interface. The RJ designations only pertain to the wiring of the jack; it is common, but not strictly correct, to refer to an unwired plug by any of these names.

For example, RJ11 is a standardized jack using a 6P2C (6 position 2 contact) modular connector, commonly used for single line telephone systems. You will often see telephone cables with four wires used for common analog telephones referred to as RJ11 cables. Technically speaking, RJ14 is the configuration for two lines using a six-position four-conductor (6P4C) modular jack.

RJ45 is a standard jack once specified for modem or data interfaces using a mechanically-keyed variation of the 8P8C (8 position 8 contact) body. Although commonly referred to as an RJ45 in the context of Ethernet and category 5 cables, it is incorrect to refer to a generic 8P8C connector as an RJ45.

Why is an Ethernet eight-pin modular connector (8P8C) not an RJ45?

Both the twisted pair cabling used for Ethernet and the telecommunications RJ45 use the 8P8C (eight position, eight contact) connector, and therein lies the confusion and the misuse of the terms. Although commonly referred to as an RJ45 in the context of Ethernet and Category 5 cables, it is incorrect to refer to a generic 8P8C connector as an RJ45.

The 8P8C modular connector is often called RJ45 after a telephone industry standard defined in FCC Part 68, but the Ethernet standard is different from the telephone standard. TIA-568 is a set of telecommunications standards from the Telecommunications Industry Association (TIA). Standards T568A and T568B are the pin/pair assignments for eight-conductor 100-ohm balanced twisted pair cabling to 8P8C (8 position 8 contact) modular connectors.
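
The T568A and T568B pin/pair assignments can be written out as simple tables. Here is a Python sketch (Python is used only as a notation) of the pin-to-color assignments as published in the TIA/EIA-568 wiring schemes:

```python
# Pin-to-conductor color assignments for the two TIA/EIA-568 wiring schemes.
T568A = {1: "white/green", 2: "green", 3: "white/orange", 4: "blue",
         5: "white/blue", 6: "orange", 7: "white/brown", 8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}

# The only difference between the two schemes is that the green and
# orange pairs swap places; the blue and brown pairs stay put.
differing_pins = [pin for pin in T568A if T568A[pin] != T568B[pin]]
print(differing_pins)  # [1, 2, 3, 6]
```

Comparing the two tables makes it obvious why a cable wired T568A on one end and T568B on the other is not a straight-through cable: pins 1, 2, 3, and 6 no longer line up.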

How does an RJ45 to RJ11 converter work?

There is no such thing as an RJ45 to RJ11 converter. They are two different types of connectors for two totally different standards of communication. Cables with various pin configurations and wire pairs are created for specific purposes. Be careful when looking to "convert" one type of wire into another. An adapter that allows you to connect an RJ11 plug into an RJ45 jack is not converting anything.

Technically speaking, neither RJ11 nor RJ45 is a computer networking standard. Many times when people are looking to convert between RJ11 and RJ45, they are dealing with a device made for a two wire phone line and trying to connect it to an Ethernet eight-pin (8P8C) unshielded twisted-pair (UTP) modular connector.

I see many questions on internet forums asking about various adapters and converters. Just because you can convert a plug from one type to another does not mean that the signal traveling along the wire will work as you expect. I cannot stress enough the importance of not using any type of adapter or converter without knowing the exact wiring configuration of the devices you are trying to connect.


Ethernet computer network cable frequently asked questions answered

ComputerGuru -

You will often hear a common computer network patch cable called an "Ethernet cable." While most modern local area networks (LANs) use the same type of cable, the term Ethernet refers to a family of computer networking technologies that defines how information flows through the wire; it does not define the physical network cable.

The standards defining the physical layer of wired Ethernet are known as IEEE 802.3, which is part of a larger set of standards by the Institute of Electrical and Electronics Engineers Standards Association.

Cable types, connector types and cabling topologies are defined by TIA/EIA-568, a set of telecommunications standards from the Telecommunications Industry Association (TIA). The standards address commercial building cabling for telecommunications products and services.

Computer network cabling

Twisted Pair Cabling is a common form of wiring in which two conductors are wound around each other for the purpose of canceling out electromagnetic interference, which can cause crosstalk. The number of twists per meter makes up part of the specification for a given type of cable.

The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded twisted-pair (STP). In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire mesh. Unshielded twisted pair (UTP) cable is the most common cable used in modern computer networking.

What does Cat5 Cable mean?

A Category 5 cable (Cat5 cable) is made up of four twisted pairs of wires, certified to transmit data up to 100 Mbps. Category 5 cable is used extensively in Ethernet connections in local networks, as well as telephony and other data transmissions.

Cat5 Cable has been the standard for homes and small offices for many years. As technology for twisted pair copper cabling has progressed, successive categories have given buyers more choices. Category 5e and Category 6 cable offer more potential for bandwidth and better potential handling of signal noise or loss. Newer cable types also help to deal with the issue of cross talk or signal bleeding, which can be problems with unshielded twisted pair cabling.

The category 5e specification improves upon the category 5 specification by revising and introducing new specifications to further mitigate the amount of crosstalk. The bandwidth (100 MHz) and physical construction are the same between the two.

The category 6 specification improves upon the category 5e specification by improving frequency response and further reducing crosstalk. The improved performance of Cat 6 provides 250 MHz bandwidth and supports 10GBASE-T (10-Gigabit Ethernet). The Cat 6 cable is fully backward compatible with previous versions, such as Category 5/5e.
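
The category comparison above boils down to a couple of numbers per category. Here is an illustrative Python summary (nominal figures commonly quoted for each category, not the full TIA/EIA-568 specification):

```python
# Nominal bandwidth and the typical use quoted for each twisted-pair category.
categories = {
    "Cat5":  {"bandwidth_mhz": 100, "typical_use": "100 Mbit/s Fast Ethernet"},
    "Cat5e": {"bandwidth_mhz": 100, "typical_use": "1 Gbit/s Gigabit Ethernet"},
    "Cat6":  {"bandwidth_mhz": 250, "typical_use": "10GBASE-T (10-Gigabit Ethernet)"},
}

for name, spec in categories.items():
    print(f"{name}: {spec['bandwidth_mhz']} MHz, {spec['typical_use']}")
```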

Older versions of voice and data cable

Category 1 Traditional UTP telephone cable can transmit voice signals but not data. Most telephone cable installed prior to 1983 is Category 1. Category 2 UTP cable is made up of four twisted-pair wires, certified for transmitting data up to 4 Mbps. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above.

Category 3 was widely used in computer networking in the early 1990s for 10BASE-T. In many common names for Ethernet standards the leading number (10 in 10BASE-T) refers to the transmission speed in Mbit/s. BASE denotes that baseband transmission is used. The T designates twisted pair cable.
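
The naming scheme described above (speed, BASE, medium) is regular enough to pull apart programmatically. A small Python sketch for illustration (the function name is an assumption; it handles names with a plain numeric speed like 10BASE-T or 100BASE-TX, not variants like 10GBASE-T):

```python
def parse_ethernet_name(name):
    """Split a name like '10BASE-T' into (speed in Mbit/s, signaling, medium)."""
    speed, rest = name.split("BASE", 1)    # '10BASE-T' -> '10' and '-T'
    medium = rest.lstrip("-")              # 'T' means twisted pair
    return int(speed), "baseband", medium

print(parse_ethernet_name("10BASE-T"))     # (10, 'baseband', 'T')
print(parse_ethernet_name("100BASE-TX"))   # (100, 'baseband', 'TX')
```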

Category 4 cable consists of four unshielded twisted-pair (UTP) copper wires used in telephone networks which can transmit voice and data up to 16 Mbit/s. Category 4 cable is not recognized by the current version of the TIA/EIA-568 data cabling standards.

What does Patch Cable mean?

A patch cord, also called a patch cable, is a length of cable with connectors on each end that is used to connect one electronic device to another. In computer networking what people often call an “Ethernet Cable” is Unshielded Twisted-Pair (UTP) patch cable.

What does Straight-Through Cable mean?

A straight-through cable is a standard patch cable used in local area networks. In a straight-through cable, each pin on one end is wired to the same pin on the other end: pin 1 on one end is connected to pin 1 on the other end, and the order follows straight through from pin 1 to pin 8.

What is a Crossover Cable?

A crossover cable is used for the interconnection of two similar devices. It is enabled by reversing the transmission and receiving pins at both ends, so that output from one computer becomes input to the other, and vice versa. The reversing or swapping of cables varies, depending on the different network environments and devices in use.

A crossover cable can connect two computers directly, as a wired alternative to having one or more computers access a router through a wireless signal. Use a straight-through cable when connecting a router to a hub, a computer to a switch, or connecting a LAN port to a switch, hub, or computer.
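
The straight-through and crossover pin mappings can be sketched as simple lookup tables. A Python sketch for illustration; the crossover shown is the common 10/100 Mbit/s two-pair version, which swaps the transmit pair (pins 1 and 2) with the receive pair (pins 3 and 6):

```python
# Straight-through: each pin maps to the same pin on the far end.
straight_through = {pin: pin for pin in range(1, 9)}

# 10/100 Mbit/s crossover: transmit pair (1, 2) swaps with receive pair (3, 6);
# the remaining pins are passed straight through.
crossover = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

# One device's transmit pins (1, 2) land on the other device's receive pins (3, 6).
print([crossover[pin] for pin in (1, 2)])  # [3, 6]
```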

Why do you need a crossover cable?

A traditional port found in a computer NIC (network interface card) is called a media-dependent interface (MDI). A traditional port found on an Ethernet switch is called a media-dependent interface crossover (MDIX), which reverses the transmit and receive pairs. Connecting an MDI port to an MDIX port works with a straight-through cable. However, if you want to interconnect two switches, where both switch ports used for the interconnection are MDIX ports, the cable needs to be a crossover cable.

Introduced in 1998, Auto MDI-X made the distinction between uplink and normal ports and manual selector switches on older hubs and switches obsolete. Auto MDI-X automatically detects the required cable connection type and configures the connection appropriately, removing the need for crossover cables.

Gigabit and faster Ethernet links over twisted pair cable use all four cable pairs for simultaneous transmission in both directions. For this reason, there are no dedicated transmit and receive pairs, and consequently, crossover cables are never required.


Installing Linux: defining distros and which version you should choose

ComputerGuru -

In April 1991, Linus Torvalds, at the time a 21 year old computer science student at the University of Helsinki, Finland, started working on some simple ideas for an operating system. Although the desktop computer market exploded throughout the 1990s, the Linux operating system remained pretty much the domain of geeks who like to build their own computers. I really believed that more than 20 years later we would have Linux computers in our homes as common as the Windows or Apple varieties.

The only dent in the domination of Windows and Apple desktop computers in recent years has been the introduction of the Chromebook as a personal computer in 2011. The Chrome operating system is a strange mix of the Linux kernel with the Google Chrome web browser as a user interface.

The Linux operating system has come a long way since the mid 1990s. From painful experiences using floppy disks and hunting down hardware drivers, my experiences installing many distributions of Linux in recent years have been pretty painless.

The Linux kernel

Just as I did with answering the question, "what is the best desktop computer operating system," I am going to generalize a bit here so we don't get too deep into the geek speak. Hopefully the tech purists won't beat me up too much for generalizing. Let's begin with quickly going over the basic definitions.

Think of the Linux kernel as an automobile engine and drive train that was designed by a community. Once the engine and drive train have been developed, there are groups that split off and design their own version of an automobile. Each of these automotive design groups has its own community with goals for how they want to use their finished product; some may focus on style and looks, another group may want to focus on being practical and functional. Once the group has a general purpose in mind, they will form an online community where they can share ideas in creating a finished product.

The Linux Distro

Each customized version of Linux that adds additional modules and applications is supported by an online community offering internet downloads as well as support. You will see the question phrased as which Linux distro should you use. Distro is a shortened version of the term distribution. There are many distros of the Linux family all based on the same Linux kernel, the core of the computer operating system. There are geeks who swear by which is the best Linux distro, but in the end it is a matter of what works best for you.

When it comes to comparing the various distributions, I find "the big three" to be very similar, because in reality they are variations of the same family. As of the time of this update, March 2017, based on various statistics the most popular version of Linux is Mint, with Debian coming in second, followed by Ubuntu. Mint is a fork of Ubuntu, which is itself a fork of Debian; Mint was forked off Ubuntu with the goal of providing a familiar desktop graphical user interface.

First answer the question, why are you looking at Linux? Do you have an old computer with an outdated operating system that you are looking to upgrade? Or perhaps you just want to see what all the fuss is about with the "free" alternative to Windows or Apple?

If you simply want to play with Linux and see what all the fuss is about, Mint is a very easy place to start. I have installed Mint on a few old computers with no issues. One of the biggest issues I have experienced with many versions of Linux is the lack of drivers for certain pieces of hardware in some laptop models. There are a few old Dell laptops on which I gave up installing Linux because finding drivers for the Wi-Fi was not worth the effort.

Here's a look at various distributions of Linux.

In our previous question on "what is the best desktop computer operating system" we addressed the topic of the "free" alternative to Windows or Apple as we explained Open Source software. Richard Stallman, the father of the free software movement, explains that software freedom refers to the preservation of the freedoms to use, study, distribute and modify that software, not zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the phrase "free as in free speech, not as in free beer." Even though Linux is open source, there are versions that are commercially distributed and supported.

Fedora - Red Hat

Red Hat Commercial Linux, introduced in 1995, was one of the first commercially supported versions of Linux, and entered the enterprise network environment because of that support. Red Hat Linux has evolved quite a bit over the years since it merged with the community based Fedora Project in 2003.

Fedora is now the free community supported home version of Red Hat Linux. While Fedora ranks slightly behind the other distros we mention here in popularity, it is often at the top of the list when it comes to integrating new package versions and technologies into the distribution. Many users in the enterprise environment rave about the stability of Fedora.

SUSE - openSUSE

openSUSE claims to be "the makers' choice for sysadmins, developers and desktop users." You may not find a lot of neighborhood geeks telling you to try openSUSE, but it ranks near the top of many charts as far as popularity. SUSE was marketing Linux to the enterprise market in 1992, before Red Hat. Many American geeks are not as familiar with SUSE because it was developed in Germany. I have not had any issues installing it. You can always download a "live CD," which allows you to run the operating system off of the CD without having to install it.

openSUSE is the open source version. SUSE is often used in commercial environments because professional help is available under a support contract through SUSE Linux. Having worked as a Novell Netware systems administrator, I was involved with SUSE Linux as the Novell Netware network operating system was coming to the end of its life, after Novell bought the SUSE brands and trademarks in 2003. When Novell was purchased by The Attachmate Group in 2011, SUSE was spun off as an independent business unit. SUSE is geared for the business environment with SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop; each focuses on packages that fit its specific purpose.

Debian - Ubuntu - Mint

Ubuntu and Mint are Debian-based: their package manager is APT (the Advanced Package Tool), a free software user interface that works with core libraries to handle the installation and removal of software on Debian-based Linux distributions. Their packages follow the DEB (Debian) package format.

Ubuntu is often used in commercial environments because professional help is available under a support contract through Canonical, the company behind Ubuntu.

Mint is basically the same OS as Debian or Ubuntu, with a different default configuration, a lot of pre-installed applications, and a nice-looking desktop. Mint was forked from the Ubuntu community with the goal of providing a familiar desktop operating system. If you are looking for something to use as a server, Debian or Ubuntu may be a better choice.


What about all the rest?

There are more than 200 different versions of Linux. Once you go beyond the versions mentioned here, you are getting into support issues. Each of the three families of Linux we mention here has a commercially supported version and a community supported version. Keep in mind, even if you are not buying support through one of the commercial versions mentioned here, each of these families has a well-established online community for support of the open source version.

Is it time to switch to Linux?

Back in the late 1990s I was taking a community college course on Novell networking and systems administration using Novell Netware. As part of the curriculum we had to write a term paper on an unrelated technology topic, and I chose Linux on the desktop. I concluded that I was impressed with Linux as an operating system, but that it would not become a mainstream desktop operating system until hardware companies embraced it and sold home computers with Linux installed. Twenty years later, that really has not happened.

You could make the case that the Google Chromebook is a version of Linux installed and configured along with a computer, but the Google Chromebook has not become a mainstream home computer. If all you want to do is surf the net, interact on social media, and read your email, a Google Chromebook works fine. But beyond that there are many issues.

Hardware drivers and website plugins can be a problem when using any version of Linux. Many manufacturers don't develop Linux device drivers for their hardware, so you need to search them out yourself through your Linux community. On websites that use Digital Rights Management, like Amazon Video, Netflix, or Sling, getting streaming to work on Linux can be difficult. Some websites don't recognize Linux as an operating system, and automatic installs of plugins fail.

I know I said at the beginning of this discussion that in recent years my experience installing Linux has been pretty painless, but I have access to name-brand hardware on pretty basic computers. The situation with hardware drivers and browser plugins keeps improving, but beware, it can still be an issue at times, and it is still a concern that can turn your Linux experience sour. The biggest problem I have experienced in experimenting with Linux is network card and Wi-Fi drivers in laptop models.

In our last article we discussed why Microsoft Windows is so popular. Whether you love them or hate them, many applications only have a Windows version. There are many websites that offer "open source equivalents" to your favorite applications. Some equivalents work well; others are very buggy. The key to using any open source application is looking at how active the community that supports it is. Be cautious of applications that look cool and work well but are basically created and supported by a single individual; they can become unsupported when the developer moves on without maintaining the application over time.

Take Linux for a test drive

Look for a live distribution of Linux that allows you to run a full instance of the operating system from either CD, DVD, or USB, without making changes to your current system. Many install downloads will offer you a live test drive of the distro that does not install anything to your hard drive. If everything works well from a live test drive, you can feel a bit more comfortable about doing the "real" install.


Desktop personal computer system basic parts defined

ComputerGuru -

If you are studying personal computers as the beginning of your career in technology, or perhaps you are just trying to understand how things work on your home computer to better deal with problems and upgrades, you can't get away with not knowing some very basic definitions of the components of a desktop personal computer system.

Computer hardware is the collection of physical elements that make up a computer system such as a hard disk drive (HDD), monitor, mouse, keyboard, CD-ROM drive, network card, system board, power supply, case, and video card.

The main system board is sometimes called the motherboard. It is the central printed circuit board (PCB) and holds many of the crucial components of the system, providing connectors for other peripherals.

The central processing unit (CPU), the brain of a computer system, is the main component on the main system board. The CPU carries out the instructions of computer programs and performs the basic arithmetical, logical, and input/output operations of the system.

System boards will have expansion slots, a CPU socket or slot, locations for memory cache and RAM, and a keyboard connector. Other components may also be present. A slot is a narrow notch, groove, or opening. A socket is a hollow piece or part into which something fits. System boards contain both sockets and slots, which are the points at which devices can be plugged in. A CPU slot is long and narrow, while a CPU socket is square.

RAM (Random Access Memory), is the computer's primary storage which holds programming code and data that is being processed by the CPU.

A hard disk drive (HDD) is called secondary storage while memory is called primary storage because programs cannot be executed from secondary storage but must first be moved to primary storage. Basically, the CPU cannot "reach" the program still in secondary storage for execution.

ROM is read-only memory. ROM chips, located on circuit boards, are used to hold programming code that is permanently stored on the chip.

Flash ROM can be reprogrammed whereas regular ROM cannot be. In order to change the programming code of regular ROM, the chip must be replaced. Upgrades to Flash ROM can be downloaded from the Internet.

BIOS stands for basic input-output system. It is used to manage the startup of the computer and ongoing input and output operations of basic components, such as a floppy disk or hard drive.

Computer software is a collection of computer programs and related data that provide the instructions for telling a computer what to do.

System software provides the basic functions for computer usage and helps run the computer hardware. An operating system is a type of software that controls a computer's input and output operations, such as saving files and managing memory. Common operating systems are typically Windows-based, but personal computers can also use an Apple or Linux-based operating system.

Application software is computer software designed to perform specific tasks. Common applications include word processing such as OpenOffice.org Writer, a spreadsheet such as Microsoft Excel, and business accounting such as QuickBooks by Intuit.

What is the difference between a PC (personal computer) and a workstation?

In a business environment you may have a computer on your desk that is very similar to the computer you have at home, but there is one major difference: the work computer is managed as part of a LAN (local area network) that contains many other computers. In the next section we define networking terms and go into a bit more detail on the concept of a LAN.

Some definitions will state that a workstation computer is faster and more powerful than a personal computer. Not necessarily. Terms like "faster and more powerful" are pretty ambiguous. The difference is a bit more clear-cut: it is a point of reference in how they are used.

In your home you have a personal computer; it is the center of your personal technology universe. When you open an application, it is on that computer. When you create a data file, like a Word document, you save it to that computer.

At work, on a workstation, when you open an application, it may be installed on your local computer, or it may be installed on an application server somewhere on your LAN. When you create a data file on your workstation, like a Word document, you save it to your personal directory on a file server on your LAN.

Many years ago, when computer systems were expensive, all the work was done on a mainframe, a huge computer surrounded by geeks in a special room. The end users had dumb terminals: there was a keyboard and a monitor at your desk, but the box they attached to was called a dumb terminal because it did not do any work, it was dumb!

The concept of the workstation is that some of the "work" is done locally at your desktop, but some of the work could also be done on a computer somewhere else, in the case of the LAN, that somewhere else would be a server.


The Data Link Layer of the OSI model

ComputerGuru -

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.  The Data Link layer deals with issues on a single segment of the network.

Layer two of the OSI model is one area where the theoretical OSI reference model differs from the competing Department of Defense (DoD) model used by TCP/IP. As we will discuss, in the implementation of TCP/IP there is a single lower layer, called the network interface layer, that encompasses Ethernet.

The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. Even though we discussed many of these Ethernet terms in discussing the Physical Layer of the OSI model, we also discuss them here in the context of the Data Link Layer.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:
• Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.
• Connection-oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.
• Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.

IEEE 802.3 is an extension of the original Ethernet and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address concerns a device's actual physical address, which is usually assigned by the hardware manufacturer. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data. The MAC sub-layer communicates with the network adapter card.
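As a quick illustration, Python's standard library can report the local machine's MAC address as a 48-bit integer; the formatting helper below is just for display, and the result is best-effort (a minimal sketch, not a network management tool):

```python
import uuid

def local_mac() -> str:
    """Return this machine's MAC address as a colon-separated hex string.

    uuid.getnode() yields the hardware address as a 48-bit integer; if it
    cannot be determined, Python substitutes a random value, so treat the
    result as best-effort.
    """
    node = uuid.getnode()
    # Emit the six bytes most-significant first, e.g. 00:1a:2b:3c:4d:5e
    return ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

print(local_mac())
```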

Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to re-transmit the message. If the device detects a collision again, it waits twice as long to try to re-transmit the message. This is known as exponential back off.
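The wait-doubling behavior described above can be sketched in a few lines. This is a toy model of Ethernet's truncated binary exponential backoff, not a driver implementation; the 51.2-microsecond slot time is the classic 10 Mbps Ethernet figure, and the cap of 10 doublings and limit of 16 attempts follow the 802.3 rules:

```python
import random

def backoff_delay(collisions: int, slot_time_us: float = 51.2) -> float:
    """Pick a random delay after the Nth consecutive collision.

    After N collisions a station waits a random number of slot times drawn
    from 0 .. 2**N - 1; the exponent is capped at 10, and a station gives
    up after 16 attempts.
    """
    if collisions > 16:
        raise RuntimeError("too many collisions, frame dropped")
    exponent = min(collisions, 10)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * slot_time_us

# The possible wait doubles with each collision:
for n in (1, 2, 3):
    print(f"after collision {n}: 0 .. {(2 ** n - 1) * 51.2:.1f} microseconds")
```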

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3 standard for wired Ethernet LANs. The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of wireless LAN equipment.

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.

A network interface controller (NIC), also known as a network interface card or network adapter, implements communications using a specific physical layer and data link layer standard such as Ethernet. A typical 1990s Ethernet network interface controller, for example, had both a BNC connector and an 8P8C connector.

 


Physical Layer Topology in computer networking

ComputerGuru -

A network topology refers to the layout of the transmission medium and devices on a network. As a networking professional for many years I can honestly say about the only time network topology has come up is for certification testing. Here are some basic definitions.

Physical Topology:

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid).

Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables that are connected to the backbone. This topology is an old one and essentially has each of the computers on the network daisy-chained to each other. This type of network is usually peer-to-peer and uses Thinnet (10BASE2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. At both ends of the chain the network must be terminated with a 50-ohm terminator.

Advantages: cheap, simple to set up.
Disadvantages: excess network traffic, a failure may affect many users, problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to the other networked workstations. The star is probably the most commonly used topology today. It uses twisted-pair cabling such as 10BASE-T or 100BASE-T and requires that all devices are connected to a hub.

Advantages: centralized monitoring, failures do not affect others unless it is the hub, easy to modify.
Disadvantages: if the hub fails, everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion. The ring topology looks physically like the star, except that it uses special hubs and network adapters. The ring topology is used with Token Ring networks (a proprietary IBM system).

Advantages: equal access.
Disadvantages: difficult to troubleshoot, network changes affect many users, a failure affects many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network. Hybrid topologies combine the layouts above and are common on very large networks. For example, a star-bus network has hubs connected in a row (like a bus network) and has computers connected to each hub.

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment.

Logical Topology:

Logical topology defines the network path that a signal follows (ring or bus), regardless of its physical design.

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.


LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access collision detect (CSMA/CD) and token passing.

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device.

For CSMA/CD networks, switches segment the network into multiple collision domains.
 


The Internet Family of Protocols: The TCP/IP protocol suite

ComputerGuru -

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. TCP/IP is not a single protocol, but rather an entire family of protocols.

The network concept of protocols establishes a set of rules so that each system can speak the other's language in order for them to communicate. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.

Transmission Control Protocol (TCP) and Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family. A protocol stack is a layered set of protocols working together to provide a set of network functions. Each protocol/layer services the layer above by using the layer below.


Internet Protocol (IP)

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination, and defines how much data can fit in a single packet. IP is responsible for the routing of packets between computers.

Internet Protocol (IP) is a connectionless, unreliable datagram protocol, which means that a session is not created before sending data. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP.

An IP packet, also known as an IP datagram, consists of an IP header and an IP payload. The IP header contains fields used for addressing and routing, including the source IP address of the original sender of the IP datagram and the destination IP address of its final destination.

Time-to-Live (TTL) designates the number of network segments on which the datagram is allowed to travel before being discarded by a router. The TTL is set by the sending host and is used to prevent packets from endlessly circulating on an IP internetwork. When forwarding an IP packet, routers are required to decrease the TTL by at least 1.
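A toy simulation can make the TTL mechanic concrete. The router names here are hypothetical, and a real router would also send back an ICMP Time Exceeded message (which is how traceroute works) rather than silently dropping the packet:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: str
    ttl: int

def forward(packet: Packet, hops: list) -> str:
    """Walk a packet through a list of (hypothetical) router names.

    Each router decrements the TTL by 1 before forwarding; when the TTL
    reaches 0 the packet is discarded.
    """
    for router in hops:
        packet.ttl -= 1
        if packet.ttl <= 0:
            return f"discarded at {router} (TTL expired)"
    return "delivered"

print(forward(Packet("hello", ttl=3), ["r1", "r2", "r3", "r4"]))  # discarded at r3 (TTL expired)
print(forward(Packet("hello", ttl=8), ["r1", "r2", "r3", "r4"]))  # delivered
```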

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data. TCP is based on point-to-point communication between two network hosts. TCP receives data from programs and processes this data as a stream of bytes. Bytes are grouped into segments that TCP then numbers and sequences for delivery.

Transmission Control Protocol (TCP) is connection-oriented, which means an acknowledgment (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgments are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgments to track successful packet transfer.

Before two TCP hosts can exchange data, they must first establish a session with each other. A TCP session is initialized through a process known as a three-way handshake. This process synchronizes sequence numbers and provides control information that is needed to establish a virtual connection between both hosts.

Once the initial three-way handshake completes, segments are sent and acknowledged in a sequential manner between both the sending and receiving hosts. A similar handshake process is used by TCP before closing a connection to verify that both hosts are finished sending and receiving all data.
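In everyday programming the handshake happens inside the operating system: by the time `connect()` returns, the three-way handshake is complete and the session is established. The loopback echo sketch below is illustrative only; it uses an ephemeral port so it is self-contained:

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one connection, echo one message back, then close."""
    conn, _addr = sock.accept()          # handshake completes here
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)               # each segment is ACKed under the hood

# Bind to an ephemeral loopback port so the sketch is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# connect() triggers the SYN / SYN-ACK / ACK exchange; once it returns,
# bytes can flow reliably in both directions.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP")
    reply = client.recv(1024)

server.close()
print(reply)  # b'hello over TCP'
```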

TCP ports use a specific program port for delivery of data sent by using Transmission Control Protocol (TCP). TCP ports are more complex and operate differently from UDP ports.

While a UDP port operates as a single message queue and the network endpoint for UDP-based communication, the final endpoint for all TCP communication is a unique connection. Each TCP connection is uniquely identified by dual endpoints.

Comparison between the OSI and TCP/IP Models

TCP/IP Model Layer 4. Application Layer

The Application layer is the topmost layer of the four-layer TCP/IP model. It sits on top of the Transport layer and defines the TCP/IP application protocols and how host programs interface with Transport layer services to use the network.

Application layer includes all the higher-level protocols:

  • DNS (Domain Naming System)
  • HTTP (Hypertext Transfer Protocol) is the protocol used to transport web pages.
  • FTP (File Transfer Protocol) used to upload and download files.
  • TFTP (Trivial File Transfer Protocol) a simplified version of FTP, running over UDP, used to upload and download files.
  • SNMP (Simple Network Management Protocol) designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.
  • SMTP (Simple Mail Transfer Protocol) used for transferring email across the internet
  • DHCP (Dynamic Host Configuration Protocol) used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.
  • X Windows, Telnet, SSH, RDP (Remote Desktop Protocol)
     

TCP/IP Model Layer 3. Transport Layer

The Transport layer is the third layer of the four-layer TCP/IP model, positioned between the Application layer and the Internet layer. Its purpose is to permit devices on the source and destination hosts to carry on a conversation. The Transport layer defines the level of service and the status of the connection used when transporting data.

The main protocols included at Transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

TCP/IP Model Layer 2. Internet Layer

The Internet layer is the second layer of the four-layer TCP/IP model, positioned between the Network Access layer and the Transport layer. The Internet layer packs data into packets known as IP datagrams, which contain the source and destination address (logical address, or IP address) information used to forward the datagrams between hosts and across networks. The Internet layer is also responsible for the routing of IP datagrams.

A packet-switching network depends upon a connectionless internetwork layer, known as the Internet layer. Its job is to allow hosts to insert packets into any network and have them delivered independently to the destination. At the destination, data packets may arrive in a different order than they were sent; it is the job of the higher layers to rearrange them and deliver them to the proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol) and IGMP (Internet Group Management Protocol).

Reverse Address Resolution Protocol (RARP) was adapted from ARP and provides the reverse functionality: it determines a software address from a hardware (or MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information, such as with the use of the PING and TRACERT utilities.

TCP/IP Model Layer 1. Network Access Layer

The Network Access layer is the first layer of the four-layer TCP/IP model. It defines the details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted-pair copper wire.

The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.

The most popular LAN architecture among those listed above is Ethernet. When operating on shared media, Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection) to access the media. An access method determines how a host will place data on the medium.

In the CSMA/CD access method, every host has equal access to the medium and can place data on the wire when the wire is free of network traffic. When a host wants to place data on the wire, it checks the wire to find out whether another host is already using the medium. If there is traffic on the medium, the host waits; if there is no traffic, it places the data on the medium. But if two systems place data on the medium at the same instant, the signals collide, destroying the data, and the data must be retransmitted. After a collision, each host waits a small interval of time and then retransmits the data.
 


The Physical Layer of the OSI model

ComputerGuru -

The Physical Layer consists of the basic hardware transmission technologies of a network, sometimes referred to as the physical media. Physical media provide the electromechanical interface through which data moves among devices on the network.

Initially, physical media were thought of as some sort of wire. As technology progresses, the types of media grow.

Bounded media transmit signals by sending electricity or light over a cable. Unbounded media transmit data without the benefit of a conduit: they might transmit data through open air, water, or even a vacuum. Simply put, media is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Data communications definitions:

Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), connections run over the standard copper phone lines found in most homes.

Integrated Services Digital Network (ISDN) uses a single wire or fiber optic line to carry voice, data, and video signals.

In the early days of connecting your computer to the internet most folks had Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), and all connections were run over the standard copper phone lines. In order for the digital world of computers to talk over analog phone lines you needed to use a MODEM.

The term MODEM comes from the words modulator and demodulator; it is a device that modulates a carrier signal to encode digital information, and demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.
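The modulate/demodulate idea can be sketched with a toy binary FSK (frequency-shift keying) scheme: each bit becomes a burst of one of two tones, and the receiver decides which tone dominates each burst. The sample rate, tone frequencies, and bit duration below are arbitrary choices for the illustration, not any real modem standard:

```python
import math

SAMPLE_RATE = 8000               # samples per second (arbitrary toy value)
SAMPLES_PER_BIT = 80             # 10 ms per bit -> 100 bits per second
FREQ_ZERO, FREQ_ONE = 1000, 2000 # tones that fit whole cycles into one bit

def modulate(bits):
    """Encode each bit as a burst of one of two audio tones (binary FSK)."""
    samples = []
    for bit in bits:
        freq = FREQ_ONE if bit else FREQ_ZERO
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def tone_energy(chunk, freq):
    """Correlate a chunk against sine and cosine at freq; return the energy."""
    s = sum(x * math.sin(2 * math.pi * freq * n / SAMPLE_RATE) for n, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * freq * n / SAMPLE_RATE) for n, x in enumerate(chunk))
    return s * s + c * c

def demodulate(samples):
    """Decide, chunk by chunk, which tone dominates and recover the bits."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        bits.append(1 if tone_energy(chunk, FREQ_ONE) > tone_energy(chunk, FREQ_ZERO) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
recovered = demodulate(modulate(message))
print(recovered)  # [1, 0, 1, 1, 0, 0, 1]
```

Real modems layer far more on top of this (phase and amplitude modulation, error correction, compression), but the round trip from bits to waveform and back is the core idea.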

Modem standards, or V-dot modem standards, are defined by the ITU (International Telecommunication Union). The FCC has limited the speed of analog transmissions to 53 Kbps.

Basic Rate Interface (BRI) is most commonly used in residential ISDN connections. It's composed of two bearer (B) channels at 64 Kbps each for a total of 128 Kbps (used for voice and data) and one delta (D) channel at 16 Kbps (used for controlling the B channels and signal transmission). The total bandwidth is up to 144 Kbps.

Primary Rate Interface (PRI) is most commonly used between a PBX (Private Branch Exchange) at the customer's site and the central office of the phone company. It is composed of 23 B channels at 64 Kbps and one D channel at 64 Kbps. The total bandwidth is up to 1,536 Kbps.

Digital Subscriber Line (DSL) technologies use existing, regular copper phone lines to transmit data. DSL hardware can transmit data using three channels over the same wire. In a typical setup, a user connected through a DSL hookup can send data at 640 Kbps, receive data at 1.5 Mbps, and still carry on a standard phone conversation over one line.

T-Carrier Technology is a digital transmission service used to create point-to-point private networks and to establish direct connections to Internet Service Providers. It uses four wires, one pair to transmit and another to receive.

T-1 lines support data transfer at rates of 1.544 megabits per second. Each T-1 line contains 24 channels. The E1 line is the European counterpart that transmits data at 2.048 Mbps.

T-3 has 672 (64 Kbps) channels, for a total data rate of 44.736 Mbps. The E3 line is the European counterpart that transmits data at 34.368 Mbps.
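The channel arithmetic quoted above is easy to check: each of these services is built from 64 Kbps channels, and in the T-carrier cases framing overhead raises the final line rate above the channel total. A quick sketch:

```python
def total_kbps(channels_64k: int, extra_kbps: int = 0) -> int:
    """Bandwidth of a link built from 64 Kbps channels plus any extra channel."""
    return channels_64k * 64 + extra_kbps

# ISDN BRI: two 64 Kbps B channels plus one 16 Kbps D channel
print(total_kbps(2, 16))    # 144
# ISDN PRI: 23 B channels plus a 64 Kbps D channel
print(total_kbps(23, 64))   # 1536
# T-1: 24 channels of 64 Kbps; framing overhead brings the line rate to 1544 Kbps
print(total_kbps(24))       # 1536
# T-3: 672 channels of 64 Kbps; overhead brings the line rate to 44736 Kbps
print(total_kbps(672))      # 43008
```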

Cable connections provide access to the Internet through the same coaxial cable that brings cable TV into your home. A signal splitter installed by the cable company isolates the Internet signals from the TV signals. The two-way cable connection is always available and can be very fast. Speeds up to 30 Mbps are claimed to be possible, although speeds in the 1 to 2 Mbps range are more typical.

The Physical Layer Ethernet Specifications

Ethernet is a family of computer networking technologies for local area networks (LANs) and larger networks, originally developed at Xerox PARC in the 1970s. Robert Metcalfe, one of the inventors of Ethernet, left Xerox PARC in 1979 to create 3Com Corporation and focus on deploying Ethernet technology.

In 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. IEEE 802.3 is a working group and a collection of IEEE standards focusing on wired Ethernet.

Twisted-pair Ethernet cable has the following specifications: a maximum of 1,024 attached workstations, a maximum of 4 repeaters between communicating workstations, a maximum segment length of 328 feet (100 meters).

100BASE-TX specification uses two pairs of Category 5 UTP or Type 1 STP cabling at a 100 Mbps data transmission speed. Each segment can be up to 100 meters long.

100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters long.

Fiber optic cable (IEEE 802.8) has a glass center core surrounded by cladding composed of varying layers of reflective glass, which refracts light back into the core. Maximum length is 25 kilometers and speed is up to 2 Gbps, but it is very expensive; it is best used for a backbone due to cost.

100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media. Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex, single-mode fiber media has a maximum segment length of 10,000 meters.
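
The segment-length limits listed above can be collected into a small lookup table. This is a sketch of our own (the dictionary keys and helper function are illustrative, not part of any IEEE document) for checking whether a proposed cable run fits a given medium:

```python
# Maximum segment lengths (meters) for the Fast Ethernet media listed above.
MAX_SEGMENT_M = {
    "100BASE-TX": 100,              # two pairs of Cat 5 UTP
    "100BASE-T4": 100,              # four pairs of Cat 3/4/5 UTP
    "100BASE-FX half-duplex": 412,  # multi-mode fiber
    "100BASE-FX full-duplex": 10_000,  # single-mode fiber
}

def segment_ok(media, length_m):
    """Return True if a proposed cable run fits the medium's segment limit."""
    return length_m <= MAX_SEGMENT_M[media]

print(segment_ok("100BASE-TX", 90))   # True
print(segment_ok("100BASE-TX", 120))  # False: over the 100 m limit
```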

Other wired LAN technologies

Ethernet has largely replaced competing wired LAN technologies such as token ring, token bus, and ARCNET.

IEEE standard 802.4 defined token bus, which was mainly used for industrial applications. Token bus was used by General Motors for their Manufacturing Automation Protocol (MAP) standardization effort. The IEEE 802.4 Working Group has been disbanded and the standard has been withdrawn.

Token ring was IBM’s protocol of choice, standardized as IEEE 802.5. Introduced by IBM in 1984, token ring was fairly successful in corporate environments, but gradually lost out to Ethernet.

ARCNET was a very early LAN system, a token-passing bus with a 2.5 Mbit/sec speed, popular in the 1980s.
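
The token-passing idea behind both token ring and ARCNET can be sketched in a few lines: only the station holding the token may transmit, and the token then passes to the next station in order. This toy simulation is purely illustrative (the station names and queued frames are invented for the example):

```python
# Toy token-passing sketch: only the token holder transmits, then passes it on.
stations = ["A", "B", "C", "D"]
pending = {"A": ["frame1"], "C": ["frame2", "frame3"]}  # frames queued per station

token = 0  # index of the station currently holding the token
log = []
for _ in range(8):  # eight token rotations around the ring/bus
    holder = stations[token]
    if pending.get(holder):
        log.append((holder, pending[holder].pop(0)))  # transmit one frame
    token = (token + 1) % len(stations)  # pass the token to the next station

print(log)  # [('A', 'frame1'), ('C', 'frame2'), ('C', 'frame3')]
```

Because a station transmits only while holding the token, there are no collisions on the shared medium, which is the key contrast with Ethernet's contention-based access.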

Wireless standards

Just as IEEE 802.3 defines the physical layer of wired Ethernet, other working groups within the IEEE project 802 family define the standards for wireless networking.

IEEE 802.11 defines wireless LAN (WLAN) technology. 802.11 and its extensions (often written 802.11x) refer to a family of specifications developed by the IEEE; 802.11 specifies an over-the-air interface between a wireless client and a base station, or between two wireless clients.

IEEE 802.15 defines Bluetooth, a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) from fixed and mobile devices, and for building personal area networks (PANs).

IEEE 802.16 defines standards for broadband wireless metropolitan area networks. Officially called WirelessMAN within the IEEE, the technology has been commercialized under the name WiMAX.

While many of the older data communications technologies may have been replaced with modern ones in your world, there are many reasons why you may need to know about them. You may get a better understanding of how things are done on your current network if you understand the evolution of the network.

If you ever work in consulting you may be surprised to find out how much of what you call obsolete is still in use. You will also find questions on older technologies on various certification tests.


What is the difference between the Internet and OSI reference model

ComputerGuru -

When learning computer networking it is essential to have a general idea of the different computer networking reference models and the reasoning behind the layered approach. Both the TCP/IP network model and the OSI model create a reference model for computer networking. The OSI model is widely used to teach students, as it was created in the mindset of a reference book. The TCP/IP standards were created to provide guidance to people actually implementing a networking technology, in the mindset of a service manual. Much like the answer to the question of why the internet was created, the answer to why we need the OSI model depends on who you ask. Here at ComputerGuru.net we try to explain the basics of the OSI model as it relates to understanding basic computer networking.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparisons of the different models. Don't get too hung up on drawing direct comparisons between the two models. Our discussion here on the two networking reference models addresses some commonly asked questions and gives some historical perspective as to how the models have evolved.


The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier. The theoretical OSI Reference Model is the creation of the Geneva-based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry. The OSI model was first published in 1984 as ISO 7498: Information processing systems -- Open Systems Interconnection -- Basic Reference Model.

The Internet model is often compared to the OSI model. This internet model has many names such as the DOD reference model or the ARPANET reference model, because like the internet itself the TCP/IP protocol suite has evolved over the years. The ARPANET was the original name of the network we now call the internet. ARPA, currently known as DARPA, the Defense Advanced Research Projects Agency, is funded by the DoD (Department of Defense).

Unlike the International Standards Organization (ISO) where there is one main library of information that maintains specific standards, the internet is an ever evolving network with many entities working together to maintain standards. There is a collection of documents known as Request for Comments (RFC) maintained by the Internet Engineering Task Force (IETF) that describes various technology specifications.

Simple talk and some needed geek speak

Since TCP/IP is the primary networking language of the internet, everyone who works in the field of technology needs to have at least a simple understanding of how it works and its role in the big picture of the internet. In the spirit of the Guru 42 family of websites, we attempt to tackle the basic understanding using as simple terms as possible.

To understand the role of TCP/IP in the big picture of the internet, we need to delve just a bit into the geek speak of the internet. If you want to learn more, and really delve into how the internet works and the interesting history of the internet, you will need an understanding of the IETF and RFCs.

What is an RFC?

The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

In computer network engineering, a Request for Comments (RFC) is a formal document published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the global community of computer network researchers, to establish Internet standards. The Internet Engineering Task Force (IETF) develops and promotes voluntary Internet standards.

The IETF started out as an activity supported by the U.S. federal government, but since 1993 it has operated as a standards development function under the auspices of the Internet Society, an international membership-based non-profit organization.

Which came first the Internet model or the ISO model?

A question often asked is which network reference model came first. Various sources state that the ground work for the Open Systems Interconnection model (OSI Model) started in the 1970s by a group at Honeywell Information Systems. Other sources point to two projects that began independently in the 1970s to define a unifying standard for the architecture of networking systems. One was administered by the International Organization for Standardization (ISO), and one by the International Telegraph and Telephone Consultative Committee (CCITT).

RFC 871, published in September 1982, is one of the first formal descriptions of the ARPANET Reference Model (ARM). The introduction of RFC 871 addresses the history of the internet model versus the ISO model.

"Since well before ISO even took an interest in "networking", workers in the ARPA-sponsored research community have been going about their business of doing research and development in intercomputer networking with a particular frame of reference in mind."

Is there an official document that explains the ARPANET Reference Model (ARM)?

RFC 871 was published in September 1982 as a recollection of the past by one of the developers of the ARPANET Reference Model, described by the author "as a perspective on the ARM." The author points out that the ARPANET Network Working Group (NWG), which was the collective source of the ARM, hadn't had an official general meeting since October 1971.

The four-layer internet model was defined in Request for Comments 1122 and 1123. RFC 1122, published October 1989, covers the link layer, IP layer, and transport layer, and the companion RFC 1123 covers the applications layer and support protocols.
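
The four layers named in RFC 1122 and 1123 map only loosely onto the seven OSI layers. A small sketch of that loose correspondence (the grouping below is a common teaching aid, not something the RFCs themselves specify):

```python
# Loose mapping of TCP/IP (RFC 1122/1123) layers to OSI layers.
# A teaching aid only: the RFCs explicitly warn against strict layering.
TCPIP_TO_OSI = {
    "Application": ["Application", "Presentation", "Session"],
    "Transport":   ["Transport"],
    "Internet":    ["Network"],
    "Link":        ["Data Link", "Physical"],
}

# Flattening the groups recovers all seven OSI layers from the four TCP/IP layers.
osi_layers = [osi for group in TCPIP_TO_OSI.values() for osi in group]
print(len(TCPIP_TO_OSI), len(osi_layers))  # 4 7
```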

The TCP/IP Model is not merely a reduced version of the OSI Reference Model with a straight line comparison of the four layers of the TCP/IP model to seven layers of the OSI model. As you read through many of the RFC documents on the IETF protocol development you will see direct statements that they are not concerned with strict layering such as section 3 of RFC 3439 which is titled: "Layering Considered Harmful."

The links below to RFC 1958 and RFC 3439 will help you understand the general mindset of the developers of TCP/IP. RFC 1122 and RFC 1123 are the definitions of the four protocol layers of the TCP/IP model. As the constantly growing library of RFCs illustrates, the TCP/IP model is an ongoing evolution.

References:

Request for Comments (RFC) http://www.ietf.org/rfc.html

Memos in the Requests for Comments (RFC) document series contain technical and organizational notes about the Internet. The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet.

RFC 871: September 1982 https://tools.ietf.org/html/rfc871

A perspective on the ARPANET REFERENCE MODEL
Abstract: The paper, by one of its developers, describes the conceptual framework in which the ARPANET intercomputer networking protocol suite, including the DoD standard Transmission Control Protocol (TCP) and Internet Protocol (IP), were designed.

RFC 1122: October 1989 https://tools.ietf.org/html/rfc1122
This RFC covers the communications protocol layers: link layer, IP layer, and transport layer;

RFC 1123: October 1989 https://tools.ietf.org/html/rfc1123
This RFC covers the applications layer and support protocols.

RFC 1958: June 1996 https://tools.ietf.org/html/rfc1958
Architectural Principles of the Internet

RFC 3439: December 2002 https://tools.ietf.org/html/rfc3439
Internet Architectural Guidelines
Extends RFC 1958 by outlining some of the philosophical guidelines to which architects and designers of Internet backbone networks should adhere.


Links to learn more:

Check out our site Geek History where we discuss the evolution of the ARPANET and TCP/IP

Why was the internet created: 1957 Sputnik launches ARPA
http://geekhistory.com/content/why-was-internet-created-1957-sputnik-launches-arpa

When was internet invented: J.C.R. Licklider guides 1960s ARPA Vision
http://geekhistory.com/content/when-was-internet-invented-jcr-licklider-guides-1960s-arpa-vision

In the 1960s Paul Baran developed packet switching
http://geekhistory.com/content/1960s-paul-baran-developed-packet-switching

The 1980s internet protocols become universal language of computers
http://geekhistory.com/content/1980s-internet-protocols-become-universal-language-computers

Photo: Interface Message Processor (IMP) ARPANET packet routing


The OSI model explained in simple terms

ComputerGuru -

Learning technology isn't sexy, but I am doing my best to keep it interesting. Here I take on the complex subject of the computer networking OSI model explained in simple terms. In our previous article, Understanding the mystical OSI Model explained in simple terms, we used an analogy to illustrate the OSI model.

Why is the OSI Reference Model important?

Simply put the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference model is based on the ACTUAL standards of the internet which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts. The OSI Reference Model is an attempt to create a set of computer networking standards by the International Standards Organization. A "Reference Model" is a set of text book definitions. You often learn something new by first learning text book definitions. The common protocol suite of computer networking is TCP/IP. The geeks who created TCP/IP were not as anal in creating a pretty "reference model." TCP/IP evolved over many years as it went from a theory to the concept of the internet.


The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight-line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one-to-one correlation of each aspect.


The stated purpose of the OSI Model:

  • breaks network communication into smaller, simpler parts that are easier to develop.
  • facilitates standardization of network components to allow multiple-vendor development and support.
  • allows different types of network hardware and software to communicate with each other.
  • prevents changes in one layer from affecting the other layers so that they can develop more quickly.
  • breaks network communication into smaller parts to make it easier to learn and understand.


The seven Layers of the OSI Model

The hierarchical layering of protocols on a computer that forms the OSI model is known as a stack. A given layer in a stack sends commands to layers below it and services commands from layers above it.

The seven layers in order from highest to lowest are Application, Presentation, Session, Transport, Network, Data Link, and Physical can be remembered by using the following memory aide: All People Seem To Need Data Processing.
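
The mnemonic can even be checked mechanically. This small sketch (purely illustrative) confirms that each word's first letter matches a layer, top to bottom:

```python
layers = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]  # highest to lowest
mnemonic = "All People Seem To Need Data Processing"

# Each word of the mnemonic shares its first letter with the matching layer.
for word, layer in zip(mnemonic.split(), layers):
    assert word[0] == layer[0]
print("mnemonic matches all 7 layers")
```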

The Application layer includes network software that directly serves the user, providing such things as the user interface and application features. The Application layer is usually made available by using an Application Programmer Interface (API), or hooks, which are made available by the networking vendor.

The Presentation layer translates data to ensure that it is presented properly for the end user. It also handles related issues such as data encryption and compression, and how data is structured, as in a database.

The Session layer comes into play primarily at the beginning and end of a transmission. At the beginning of the transmission, it makes known its intent to transmit. At the end of the transmission, the Session layer determines if the transmission was successful. This layer also manages errors that occur in the upper layers, such as a shortage of memory or disk space necessary to complete an operation, or printer errors.

The Transport layer provides the upper layers with a communication channel to the network. The Transport layer collects and reassembles any packets, organizing the segments for delivery and ensuring the reliability of data delivery by detecting and attempting to correct problems that occurred.

The Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

The Data Link layer provides a system through which network devices can share the communication channel. This function is called media-access control (MAC).

The Physical layer provides the electro-mechanical interface through which data moves among devices on the network.

In the articles that follow we will break down each layer in more detail, covering topics you will need to know as a networking professional.
 


Understanding the mystical OSI Model explained in simple terms

ComputerGuru -

As you begin your quest to learn computer networking one of the first tasks you have before you is a basic understanding of the OSI model.

For many folks understanding the OSI model is like trying to understand some mystical formula that controls the way computer networks operate.

As we help you to begin your journey to understanding computer networking, we will tackle explaining the complex subject of the computer networking OSI model in simple terms, in hopes that you will gain an understanding of the reasons behind the definitions.

You can find a lot of resources that define the components of the OSI model, but an understanding of the reasons behind the definitions will go a long way toward fully understanding this complex technology model.

The acronym and the organization behind it can get confusing. The formal name for the OSI model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort among many vendors to develop hardware and software that could be used together. The model is a product of the International Organization for Standardization (2), which is often abbreviated ISO.


The logic behind the OSI model

Before we delve into the OSI model, let us take a moment to understand the organization behind it. You may have seen the term ISO certified in various technology areas. ISO, the International Organization for Standardization, (1) is the world's largest developer and publisher of International Standards. ISO helps to manage and create international standards in many technical areas to ensure the same quality of a product or process regardless of location or company.

The OSI (Open Systems Interconnection) model provides a set of general design guidelines for data communications systems and gives a standard way to describe how the various layers of data communication systems interact. Applying the logic of the ISO standards to computer networking, a computer component or piece of software needs to comply with a set of standards so that the product or process will work no matter where in the world we are, and no matter who in the world is producing it.

Putting the OSI model into perspective

Strive for a good understanding of the intent of the model and a few of the core principles; that will go a long way toward an overall understanding of computer networking. Do not focus on the intricate details of the OSI model at first, as the more you read the more confused you may get. The model was created in the 1970s and the technology is ever changing. Many textbooks will contradict each other on some aspects of the upper layers. Some of the reasoning behind the upper layers is for processes that are not nearly as useful today as they were many years ago, and for that reason many other network models blend the upper three layers into a single layer.

Basic definitions of the OSI Model

The seven layers of the OSI Model can be remembered by using the following memory aide: All People Seem To Need Data Processing. As you say the phrase, write down the first letter of each word, and that will help you to remember the seven layers in order from highest to lowest: Application, Presentation, Session, Transport, Network, Data Link, and Physical. We will briefly discuss the lower four layers from the bottom up.

Layer one, the Physical layer provides the path through which data moves among devices on the network.

Layer two, the Data Link layer provides a system through which network devices can share the communication channel.

Layer three, the Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

Layer four, the Transport layer provides the upper layers with a communication channel to the network.

An analogy to understand the model

Some of the reasons behind the OSI model are to break network communication into smaller, simpler parts that are easier to develop, and to facilitate standardization of network components to allow multiple-vendor development and support.

Let's take the reasons behind the OSI model and apply them to something totally different to illustrate how they are used. If we wanted to start a railroad and build a new type of train from scratch, and we wanted this train to be able to use existing train tracks, and existing train stations so our new system could get up and running quickly, we would need to understand what existing standards are currently in place.

Even if we never had to build a set of train tracks, we would need to understand the standards by which train tracks were built and designed, so we could be sure our train could operate on them and understand how the track is shared. Likewise, in order for components to operate, manufacturers must understand the track, layer one, and how the track is shared, layer two.

If we are building trains, not train stations, we need to know the size and shape of other vehicles using the tracks so our trains could use the same track as all the other trains. Layer one of the OSI model gives us the path, or the track we use for communication. Layer one, referred to as the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Once you have more than one train on the track, you need to find a way to share the track. Layer two provides a system through which network devices can share the communication channel, or in the case of our analogy, share the track. One of the functions of layer two is called media access control (MAC). If you think about the term media access control you can break it down into the two parts it represents, the media or the track, and access control, or the sharing of the track.

In the OSI model layers one and two represent the media, or the physical components. Layers three through seven represent the logical, or the software components.

In layer three of the OSI model, the Network layer, the logical decision is made as to which physical path the information should follow from its source to its destination.

In order to continue our analogy to understand this complex set of rules, think of the track system that has already been built as layers one and two. Once this track system is in place we need a system to control the routing of the train system that runs on the tracks. Think of layers three through seven as processes which affect the train itself, which would represent the actual package of information being transported along the tracks. The main purpose of layer three is switching and routing.

Layer four of the OSI model, the transport layer ensures the reliability of data delivery by detecting and attempting to correct problems that occurred. In terms of our analogy, think of this as a set of standards and procedures that allows our train to arrive safely at its destination in a timely manner.

Learning and understanding the OSI model can be confusing. The goal of this article was not to define the layers of the OSI model in purely technical terms, but to offer an analogy to understand why the model is needed and how it is used to establish standards for data communications. In our next article we will go over the basic definitions of all the layers of the OSI model.


Sources:
(1) http://www.iso.org/iso/about.htm
(2) http://www.iso.org/iso/home.html
