Geek News

Thomas Edison launched the modern electric utility industry

GeekHistory II -

All major inventions were an evolution of ideas and inventors over many years. Many light bulbs were invented before Edison's that worked in the laboratory and for short-term demonstrations. More than twenty inventors filed patents for various versions of the incandescent lamp before Edison, and dozens of inventors have filed patents for incandescent lamps since.

In the mythology of famous scientists and inventors there is the eureka moment, when some totally new idea or theory is discovered. Thomas Edison's eureka moment was not inventing the light bulb but creating a carbon-filament lamp in a vacuum. This one improvement on the concept of the light bulb created the first commercially practical incandescent light. Edison's first attempts lasted a little over half a day, but eventually his efforts led to a bulb that could burn for 1,200 hours.

Edison's success went beyond the incandescent light bulb to developing an entire integrated system of electric lighting. Thomas Edison presented to the world a complete system of commercial electric lighting and power using a DC (Direct Current) generating station.

Thomas Alva Edison (1847 - 1931) was a legendary inventor who saw the need for improving upon existing ideas. Thomas Edison was influenced by the work of many inventors in Europe who were moving forward in the 1870s. Using the dynamo as a power source, Pavel Yablochkov invented the Yablochkov Candle in 1876. Yablochkov's inventions improved on previous arc light designs and proved that the installation of electric lighting was economically feasible.

Edison saw that arc lighting was becoming popular as an outdoor form of lighting, and he improved upon the concept by creating a more practical and efficient version of the incandescent light bulb. With his improved invention of the Edison bulb, he created a demand for a source of electrical power.

When we start telling the story that begins with, "when Thomas Edison invented the light bulb," we are usually quickly attacked by someone screaming, "Edison didn't invent the light bulb!" Well, in one sense that is true: Edison did not invent the incandescent light bulb. But when you step back and look at the big picture, you could say that not only did Thomas Edison introduce the world to the incandescent light bulb, Thomas Edison launched the modern electric utility industry with the creation of the Pearl Street station in lower Manhattan in 1882.

From Edison Electric to General Electric

The biggest mistake of Edison's career was his refusal to acknowledge the limitations of DC power. By the time the War of Currents ended around 1893, Thomas Edison was no longer in control of Edison Electric. In 1892 Thomas Edison lost control of his own company, as financier J. P. Morgan merged Edison Electric with the Thomson-Houston Electric Company to form General Electric.

Even though the War of Currents was short lived, roughly from 1886 through 1893, the rivalry of the Edison team (which became part of the General Electric Company) versus the Westinghouse team lived on in many ways.

Charles P. Steinmetz (1865-1923) began his career as a draftsman at the Osterheld and Eickemeyer company in 1889. The company, along with all of its patents and designs, was acquired by the newly formed General Electric Company in 1892 because of its expertise in the area of electrical power and transformers.

Charles Proteus Steinmetz understood electromagnetism better than anyone of his generation, and while working for General Electric he was on the team that developed some of the world's first three-phase electrical systems. General Electric was the company formed by the merger of Edison Electric and the Thomson-Houston Electric Company. It is ironic, when you consider that Edison originally fought against the use of AC power, that General Electric would switch gears from Edison's ideas on DC power distribution and embrace the work of Steinmetz in the areas of AC circuit theory and analysis.

Even though Edison was not at the helm of General Electric, the interactions between Steinmetz and Edison are the source of many legendary stories. One famous story is the $10,000 bill sent to Henry Ford for services performed by Steinmetz to repair an electric generator. When asked for an itemized bill, Steinmetz responded personally to Ford's request with the following: Making chalk mark on generator, $1. Knowing where to make mark, $9,999.

Elihu Thomson (1853-1937) invented the three-coil dynamo, which was the basis for a successful electric lighting system he produced in 1879 through the Thomson-Houston Electric Company. Elihu Thomson and E. J. Houston established the Thomson-Houston Electric Company in Philadelphia in 1879. The Thomson-Houston Electric Company merged with the Edison General Electric Company to become the General Electric Company in 1892. Thomson was named chief engineer of General Electric, producing many of the fundamental inventions for the newly formed company.

When we speak of the great engineers who led the Westinghouse Company, we think of William Stanley followed by Benjamin Lamme. When we speak of the great engineers who led the General Electric Company, the names Charles P. Steinmetz and Elihu Thomson rise to the top of the list. Neither Steinmetz nor Thomson worked directly for Edison; they became members of the General Electric team when their companies were acquired by the General Electric Company.

Graphic: Charles P. Steinmetz and Thomas A. Edison


Who is responsible for electricity and AC power in our homes

GeekHistory II -

In the previous article we looked at the answer to who contributed to the development of electricity and AC power by drawing attention to the work of various European inventors who were establishing the ideas and principles that were used by Thomas Edison and Nikola Tesla.

The War of Currents

The War of Currents was much more than a battle between two crazy inventors; the effort to electrify our world was the work of many inventors and engineers. Just as it is impossible to pinpoint one single invention or one single inventor as the eureka moment when the Internet was invented, the same can be said of the development of electricity and AC power distribution. Many names from that generation played a significant part in bringing electricity to our homes and developing AC power distribution.

The War of Currents started as a battle between George Westinghouse and Thomas Edison; Nikola Tesla was not a member of team Westinghouse when it started. The War of Currents began not long after Westinghouse created the Westinghouse Electric Company in 1886. Edison was creating DC power plants and felt threatened by Westinghouse, who had been experimenting with AC power and was ready to start rolling it out commercially. Edison began a public media campaign claiming that high voltage AC systems were inherently dangerous.

By the time the War of Currents ended Thomas Edison was no longer in control of Edison Electric. In 1892 Thomas Edison lost control of his own company, as financier J. P. Morgan merged Edison Electric with the Thomson-Houston Electric Company to form General Electric.

George Westinghouse and the Westinghouse Electric Company would have two decisive victories over General Electric in 1893, first winning the bid to light the 1893 World's Columbian Exposition in Chicago, followed by getting the contract to build a two-phase AC generating system at Niagara Falls.

Westinghouse Electric engineers

William Stanley (1858-1916) was an inventor and engineer who played a significant part in the development of AC power distribution but who seldom gets mentioned. The Westinghouse Electric Company was started in 1886 with William Stanley Jr. as chief engineer. In 1886 William Stanley created the first full-featured AC power distribution system using transformers in Great Barrington, Massachusetts, a project funded by Westinghouse.

The work of William Stanley in the 1880s was critical to the success of Westinghouse. In 1890 Stanley decided to sever his ties with Westinghouse and formed the Stanley Manufacturing Company. Different sources tell different stories of why Stanley had a falling out with Westinghouse, but it was mainly over money. Stanley had ambitions of creating his own electric company on a scale to compete with Edison and Westinghouse. In 1903 General Electric (GE) acquired the Stanley Manufacturing Company.

Nikola Tesla (1856-1943) was a Serbian-born inventor who grew up in an area of the Austro-Hungarian Empire that is part of the modern-day country of Croatia. Most of Nikola Tesla's early inventions fell into the categories of electrical power distribution or motors and generators. In 1884, at age 28, Tesla left Europe and headed for New York City in search of Thomas Edison. Tesla was interested in AC (alternating current) systems and was looking to impress Edison with his ideas on AC systems. Edison wasn't interested in hearing about AC, as he was developing DC (direct current) electrical power systems.

In 1888 Tesla presented to the American Institute of Electrical Engineers his polyphase alternating current system in the report "A New System of Alternating Current Motors and Transformers." George Westinghouse was a visionary businessman and inventor who saw the possibilities of Alternating Current (AC) as the primary form of delivering electricity. Westinghouse saw Tesla's ideas as something he could use in his quest to develop AC, and purchased Tesla's alternating current patents. Westinghouse also paid Tesla to work with the Westinghouse team until the patents were fully implemented.

Oliver Blackburn Shallenberger (1860 - 1898) was an American engineer and inventor, best known for inventing the watt-hour meter, a device that measured the amount of AC current used and made possible the business model of the electric utility. In 1884 Oliver Shallenberger went to work for the Union Switch and Signal Company, a supplier of railway signaling equipment founded by George Westinghouse. The results of Shallenberger's work at the Union Switch and Signal Company led to his appointment as chief electrician at the Westinghouse Electric Company. Shallenberger oversaw the development of the Tesla Polyphase System.

Benjamin Garver Lamme (1864 - 1924) designed much of the apparatus for the Westinghouse exhibit at the Columbian Exposition in Chicago in 1893. Benjamin Lamme was the engineer who expanded upon Nikola Tesla's patents, purchased by Westinghouse, in designing the Niagara Falls generators that led to Westinghouse's victory in the War of Currents. In 1918 Lamme received the Edison Medal for his contributions to the electrical power field. It is another irony, considering Lamme helped to develop AC power distribution, that Edison was originally against AC power distribution.

George Westinghouse (1846 - 1914), the son of a New York agricultural machinery maker, came to Pittsburgh in 1868 in search of steel for a new tool he designed to guide derailed train cars back onto the track. Before he left Pittsburgh to retire back to New York, Westinghouse gave the world safer rail transportation, steam turbines, gas lighting and heating, and brought electricity to the average American's home.

George Westinghouse wasn't the inventor of AC power, but he had the vision to bring it all together. Edison turned away great engineers for talking about AC development, while Westinghouse was making them members of his team, and buying AC patents developed in Europe for use in America. George Westinghouse proved to the world the concept of AC power distribution by winning the bid to provide lighting for the World's Fair Columbian Exposition of 1893. Westinghouse installed a complete polyphase generation and distribution system with multiple generators.

Who is responsible for electricity and AC power in our homes?

Does Thomas Edison or Nikola Tesla deserve all the credit? What about William Stanley, Benjamin Garver Lamme, Oliver Shallenberger, or George Westinghouse? Who is to say who contributed more to the development of electricity? They all contributed!

Graphic: Westinghouse Electric engineers William Stanley and Benjamin Lamme


Who contributed to the development of electricity and AC power

GeekHistory II -

Just as it is impossible to pinpoint one single invention or one single inventor as the eureka moment when the Internet was invented, the same can be said of the development of electricity and AC power distribution. There were many inventors working on various parts which came together.

Who contributed more to the development of electricity and AC power distribution?

Are you looking for a single name, like Thomas Edison or Nikola Tesla? People often talk about the "War of Currents" as the great battle between Edison and Tesla to develop a system for the distribution of electrical current. During the War of Currents, Edison lost control of Edison Electric as it merged with the Thomson-Houston Electric Company to form General Electric, and Nikola Tesla was one member of a team of engineers working for Westinghouse Electric. George Westinghouse is every bit as much responsible for our current system of AC power in America, arguably more responsible than Thomas Edison. But the world remembers Edison much more than Westinghouse.

Many internet memes present the War of Currents as a technology battle between Thomas Edison and Nikola Tesla. Both men were great inventors, but they lived in a time when many people were working on developing the concepts of electric lights and the distribution of electrical current. What is often not mentioned in the telling of the "War of Currents" stories is that many of the American inventions were based on the work of various European inventors who were establishing the ideas and principles that were used by Thomas Edison and Nikola Tesla.

European inventors before Edison and Tesla

Edison did not invent the concept of lighting or the electrical distribution system. Thomas Edison was influenced by the work of many inventors in Europe who were moving forward in the 1870s, such as Pavel Yablochkov.

Pavel Yablochkov (1847-1894) was a Russian electrical engineer who invented the earliest commercially successful arc lamp, known as the Yablochkov Candle. During the Paris Exposition of 1878 he introduced his lighting system to the world, installing 64 of his arc lights along a half-mile length of streets. Yablochkov made the installation of electric lighting economically feasible. The intensely bright light created by the arc lamp was great for lighting the outdoors, but it was not practical for indoor use.

Nikola Tesla did not invent the concept of Alternating Current and electric motors. Scientists and inventors such as Michael Faraday and Hippolyte Pixii were working with Alternating Current and electric motors in the early 1800s, years before Tesla was born.

Michael Faraday (1791-1867), British physicist and chemist, demonstrated the first simple electric motor in 1821. In 1831 Faraday published the results of his experiments producing an electrical current in a circuit using only the force of a magnetic field. Faraday's discovery is known as Faraday's Law of Electromagnetic Induction.

Hippolyte Pixii (1808–1835) was an instrument maker from Paris. Pixii built an early form of alternating current electrical generator in 1832, based on the principle of magnetic induction discovered by Michael Faraday.

George Westinghouse looks to Europe

As George Westinghouse began studying the debate surrounding AC (alternating current) versus DC (direct current) he looked to various European inventors for ideas and inspiration for AC designs.

The ZBD transformer, created in 1885, was based on the work of Károly Zipernowsky, Ottó Bláthy, and Miksa Déri of the Austro-Hungarian Empire, who first designed and used the transformer in both experimental and commercial systems. The Ganz Company used induction coils in their AC incandescent lighting systems, the first appearance and use of the toroidal-shaped transformer. The reliability of AC technology received impetus after an 1886 installation by the Ganz Works that electrified much of Rome, Italy.

A power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881. In 1884 Lucien Gaulard's transformer system was on display at the first large exposition of AC power in Turin, Italy. The 25-mile-long transmission line illuminated arc lights and incandescent lights, and powered a railway.

Westinghouse purchased the American rights to the Gaulard and Gibbs patents for AC transformers. The transformers initially designed for the Westinghouse company were based on Gaulard-Gibbs designs that the company had imported for testing. Westinghouse and his staff worked on improving and redesigning the transformers, and the Westinghouse Electric Company was started in 1886.

Galileo Ferraris (1847-1897) was an Italian physicist and electrical engineer known for introducing the concept of the rotating magnetic field and for the invention of the rotating magnetic field asynchronous motor. Ferraris was involved in the early experiments in long-distance AC power transmission that occurred in Germany and Italy in the early 1880s.

Nikola Tesla patents provide the final piece

Westinghouse was in a race to be the first company to commercially develop AC power, and George Westinghouse saw that Nikola Tesla's U.S. patents for his AC induction motor and related transformer design were the quickest way to make the final push to win the War of Currents. Nikola Tesla was also hired for one year to be a consultant at the Westinghouse Electric & Manufacturing Company's Pittsburgh labs.

Some sources say the discoveries and inventions of Nikola Tesla and Galileo Ferraris regarding the induction motor were made entirely independently of each other. Some sources name Galileo Ferraris as the inventor of the induction motor based on his research on the rotary magnetic field, started in 1885. Some sources name Nikola Tesla as the inventor of the induction motor based on his filing of US Patent 381,968 on May 1, 1888.

Not taking any chances as to who did it first, Westinghouse also purchased a U.S. patent option on induction motors from Galileo Ferraris.

Was Nikola Tesla a patent thief?

In the world of the modern Internet, Thomas Edison is often called a patent thief who took advantage of the great inventor Nikola Tesla. Ironically, there is a case to be made that the polyphase electric motor, the invention that made Nikola Tesla famous, was based on a design that Tesla copied from Italian inventor Galileo Ferraris.

Westinghouse engineer William Stanley stated in a letter to the Electrical Review published in March 1903, "I myself have seen the original motors, models, and drawings made by Ferraris in 1885, have personally talked with the men who saw these models in operation and heard Ferraris explain them at that date."

Graphic: The great triad of Miksa Deri, Otto Titusz Blathy, and Karoly Zipernowsky (left to right), connected by the invention of the transformer, worked at the famous Ganz factory in Budapest.


Computer networking packet switching explained in simple terms

ComputerGuru -

Throughout the standard for Internet Protocol you will see the description of packet switching: "fragment and reassemble internet datagrams when necessary for transmission through small packet networks." A message is divided into smaller parts known as packets before it is sent. Each packet is transmitted individually and can even follow a different route to its destination. Once all the packets forming a message arrive at the destination, they are recompiled into the original message.

Internet data, whether in the form of a Web page, a downloaded file or an e-mail message, travels over a system known as a packet-switching network. Each of these packages gets a wrapper that includes information on the sender's address, the receiver's address, the package's place in the entire message, and how the receiving computer can be sure that the package arrived intact.
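As a rough sketch of that wrapper idea (the field names and the MD5 digest below are illustrative choices for this sketch, not the actual IP header layout), the fragmenting, wrapping, and reassembly can be shown in a few lines of Python:

```python
import hashlib

def fragment(message, size, sender, receiver):
    """Split a message into packets, each wrapped with the metadata
    described above: addresses, place in the message, integrity check."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [
        {
            "sender": sender,
            "receiver": receiver,
            "seq": n,                  # this packet's place in the message
            "total": len(chunks),      # how many packets to expect
            "checksum": hashlib.md5(chunk.encode()).hexdigest(),
            "payload": chunk,
        }
        for n, chunk in enumerate(chunks)
    ]

def reassemble(packets):
    """Verify each packet arrived intact, then rebuild the message
    regardless of the order in which the packets arrived."""
    for p in packets:
        if hashlib.md5(p["payload"].encode()).hexdigest() != p["checksum"]:
            raise ValueError(f"packet {p['seq']} corrupted in transit")
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)
```

Because each packet carries its own sequence number and checksum, the receiver can rebuild the message even when the packets arrive out of order.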

There are two huge advantages to packet switching. The network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. And if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message.

Packet switching explained in simple terms

In teaching the concept of packet switching in the classroom, I would take a piece of paper with a message written on it, and from the front of the classroom, ask the person in the front seat simply to turn around and pass the paper to the person behind him, and in turn continue the process until the paper made it to the person in the back row.

In the next phase of the illustration, I would take the same piece of paper that had the message written on it, and tear it into four pieces. On each individual piece of paper I would address it as if sending a letter through the postal service, by writing my name as the sender, and also the name of the person in the back of the room as the recipient. I would also label each individual piece of paper as one of four, two of four, three of four, and four of four.

This time I would take the four individual pieces of paper and walk across the front row, and as I handed one piece of paper to four different students, I would explain to them who was to receive the paper, and asked them to pass it to the person marked as the recipient by using the people behind them. When all four pieces of paper arrived at the destination, I would ask the recipient to read the label I had put on each piece of paper, and confirm they had received the entire message.

My original passing of the paper represented circuit switching, the telecommunications technology which used circuits to create a virtual path, a dedicated channel between two points, and then delivered the entire message.

My second passing of the "packets," or scraps of paper, illustrated packet switching, with each individual in the room acting as a router. The key difference between the two methods was the additional routes that the pieces of the message took. It was a very primitive, but effective, demonstration of packet switching and the way in which a message is transmitted across the internet.
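The classroom demonstration above can be mirrored in code; the function names and labels below are invented for this sketch, not drawn from any networking library:

```python
import random

def tear_into_packets(message, n, sender, recipient):
    """Tear the message into n labeled pieces, like the paper in the demo."""
    size = -(-len(message) // n)  # ceiling division so nothing is lost
    return [
        {"sender": sender, "recipient": recipient,
         "label": f"{i + 1} of {n}",   # "one of four", "two of four", ...
         "seq": i, "total": n,
         "payload": message[i * size:(i + 1) * size]}
        for i in range(n)
    ]

def deliver(packets, rows):
    """Hand each packet to a different row of students (a different route);
    the shuffle models the pieces arriving out of order."""
    for packet, row in zip(packets, rows):
        packet["route"] = row
    random.shuffle(packets)
    return packets

def read_message(packets):
    """The recipient confirms every labeled piece arrived, then reassembles."""
    assert {p["seq"] for p in packets} == set(range(packets[0]["total"]))
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Here tear_into_packets plays the role of tearing the paper into four addressed pieces, deliver hands them to different rows of students, and read_message is the student in the back row confirming the entire message arrived.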

Once the concept of packet switching was developed the next stage in the evolution was to create a language that would be understood by all computer systems. This new standard set of rules would enable different types of computers, with different hardware and software platforms, to communicate in spite of their differences.

Geek History: In the 1960s Paul Baran developed packet switching


Who discovered electricity?

GeekHistory II -

Asking who discovered electricity is the equivalent to asking who first discovered fire. Electricity existed before humans walked the earth. You could probably make the case that the first human to discover fire also discovered electricity as they watched a bolt of lightning strike the earth to start a fire. The bolts of static electricity we see in the sky in the form of lightning during a thunderstorm show the power of electricity.

Ancient writings show that various cultures around the Mediterranean knew that rods of amber could be rubbed with cat fur or silk to attract light objects like feathers. Amber is a gemstone of fossilized tree resin used in making a variety of decorative objects and jewelry. Amber has also been used as a healing agent in folk medicine. The first particle known to carry electric charge, the electron, is named for the Greek word for amber, ēlektron.

If you are looking for the name of someone "who discovered electricity" you could possibly look to the Greek philosopher Thales of Miletus (624 B.C. to 546 B.C.). Thales was known for his innovative use of geometry, but his writings are some of the first to document the principles of magnetism and static electricity. Thales documented magnetism through his observations that lodestone attracts iron, and static electricity through his observations of rubbing fur on substances such as amber.

Some stories claim that various artifacts found in the Middle East show that some electricity production was possible thousands of years ago. For telling the story here at Geek History, and busting the myth that Benjamin Franklin discovered electricity, we will start in more modern times, offering the name of William Gilbert as the first person to define electricity, around 1600. Each person on the list that follows contributed to our modern understanding of electricity.

William Gilbert (1544-1603) is regarded as the father of electrical engineering and one of the first scientists to document the concept of electricity, in his book De Magnete published in 1600. William Gilbert made a careful study of electricity and magnetism and defined the distinction between the two in his series of books. Gilbert coined the term for electricity from the Greek word for amber, ēlektron.

Robert Boyle (1627-1691) is regarded as the first modern chemist and one of the pioneers of the modern experimental scientific method. Boyle is also credited with experiments in the fields of electricity and magnetism. In 1675, Boyle published "Experiments and Notes about the Mechanical Origine or Production of Electricity."

Benjamin Franklin (1706 - 1790) is often credited in various books and websites as having discovered electricity in the 1750s. The legendary story of Franklin's experiments flying a kite in a thunderstorm allegedly took place in 1752. Although Franklin was quite a scientist and inventor, inventing the lightning rod among other things, scientists such as William Gilbert and Robert Boyle began documenting the concept of electricity long before Franklin's experiments.

Alessandro Volta (1745-1827) was an Italian physicist who is regarded as one of the greatest scientists of his time. Before we move on to the next section, where we look at AC power distribution, we give thanks to Alessandro Volta, the scientist who discovered that particular chemical reactions could produce electricity. Volta invented the first battery, known as the Voltaic Pile, in 1799. The unit of electromotive force, the volt, was named to honor Volta.

Michael Faraday (1791-1867), British physicist and chemist, demonstrated the first simple electric motor in 1821 in London. The original "science guy," Faraday founded the Friday Evening Discourses in 1826, and in the same year the Christmas Lectures for young people at the Royal Institution. In 1832 Faraday demonstrated that three types of electricity thought to be different (electricity induced from a magnet, electricity produced by a battery, and static electricity) were in fact all the same. Faraday introduced several words into the electricity vocabulary, such as ion, electrode, cathode, and anode.

James Clerk Maxwell (1831-1879) introduced his mathematical conceptualization of electromagnetic phenomena to the Cambridge Philosophical Society in 1855. The Scottish physicist's best-known discoveries concern the relationship between electricity and magnetism and are summarized in what has become known as Maxwell’s Equations. Maxwell's pioneering work during the second half of the 19th century unified the theories of electricity, magnetism, and light.

Graphic: Long before television, nineteenth century scientist and electricity pioneer Michael Faraday took science to the people, as illustrated here delivering the British Royal Institution's Christmas Lecture for Juveniles during the Institution's Christmas break in 1856.

Learn More:

George Westinghouse used Tesla power to defeat Edison in Currents War


README 1ST GeekHistory II the sequel

GeekHistory II -

The idea for the website GeekHistory started when I was teaching Internet and web building courses in 1996. I would start each course with a brief history lesson showing the evolution of the internet that started in the 1960s. Some students commented that it was a boring waste of time; some students praised it as an interesting and informative introduction to the course. It seems that history is a topic that people either love or hate.

Because of many positive comments by students on the brief internet history lesson, I registered the domain back in 2001 with the hopes of developing a history of technology website. I still have a lot of notes collected over the years, with website URLs as references for my material; some of my resources are notes from websites that no longer exist. Very few of the sites still exist in the form they did back then. I found a lot of good reference material on the AltaVista website. Thankfully I printed a lot of that content and have paper copies of the material in a binder.

GeekHistory was just a shell of a website for many years, just an idea bouncing around in my brain. After more than a decade of owning the domain name I finally started devoting time to building the website on the history of technology. In recent years I have immersed myself into research on various topics, looking for the original sources, in order to tell the story of the history of technology based on various generations of ideas and timelines.

We are developing the website GeekHistory like a book, with chapters focused on various generations of inventors and inventions. As we sort through all the information we have gathered over the years, we decided to create the companion website GeekHistory II, more in the format of an almanac, with various lists, fast facts, and quick answers to simple questions.

The goal of GeekHistory

My lifelong love of history and technology comes together at GeekHistory. I began working with radios and telecommunications in the Army National Guard in the 1970s, and my first certification was an FCC general class radiotelephone license. A lifelong evolution from field service technician for various office automation companies through my current career in systems administration and telecommunications has inspired me as a writer and web developer on technology topics.

Even though my personal collection of material for the study of geek history dates back to my early days in technology, as far back as the 1970s, I am always finding new questions and new myths and legends to address. Through question and answer sites, Twitter wars, and various other social media outlets, I keep running across myths and misinformation represented as facts, sending me off on a quest to find the truth. Anytime a claim is made or a fact is stated on a website or blog that does not appear to have firsthand knowledge of the subject, I make a note to follow up on it. I am continuously finding articles by allegedly credible newspapers, magazines, and respected organizations that are based on popular myths, which sets me off in search of original sources of information to find the truth.

I am not a university professor with a team of editors and advisers working with me developing a website. I am one man who loves technology and history and is amazed by how little people know about the great minds in the world of technology. Geek History is not meant to be an authoritative source for technology history. We are just trying to get you to think about the many amazing people who have contributed to the world of technology. Our goal is to increase awareness, educate, and entertain.

One of my inspirations for the Guru42 Universe is the Oliver Wendell Holmes quote, "Man's mind once stretched never goes back to its original dimension." The more I learn about geek history, the more questions I have, and the more I want to know.

The "who invented" myth and the eureka moment that never happened

GeekHistory II -

Every question that begins with "who invented" should get this as an automatic response: "It is usually a fallacy to credit a single individual with the invention of a complicated device. Complicated devices draw on the work of multiple people."

We spend a lot of time deciding who to credit for various inventions when those inventions were nothing more than the next step in the evolution of the world of technology.

Inventions during the Industrial Revolution involved a series of new devices and creations in which man power, and literally horse power, was replaced by machines: from steam engines that turned manual labor into mechanical work, to the automobile, which replaced the horsepower of a live horse with the horsepower of an internal combustion engine. The inventions of the industrial age were an evolution of doing existing things in very new ways. The 18th century idea of an invention was genuinely more individual and less systemic.

It was a different world in the industrial age of the late 1800s and early 1900s. The greatest minds and the greatest laboratories were not inventing things at universities; they were working in what resembled an industrial machine shop. Thomas Edison industrialized the concept of the individual inventor: his invention factory took the idea of one man in a lab tinkering with a problem and changed it into project management, where one man hired a team to do more than he could as an individual. People say that Edison stole ideas because he had other people do the experiments while he took the credit. No, that was the real genius: he created the invention factory. There were many menial tasks that needed done, and he automated the process.

When the internet and personal computers were being developed in the 1960s and 1970s, most of the geeks were doing their work at universities, with much of the work sponsored by government agencies like DARPA (the Defense Advanced Research Projects Agency).

What does it take to become a great inventor?

Being an inventor is not a field of study; it is a state of mind. Great inventors, innovators, and industrialists all had one thing in common: a passion for their ideas, and a passion for turning their visions into reality. There are endless stories of "inventors" who were always tinkering with things. They had a burning desire to understand how things worked.

Using a tree branch to help us pry something apart, we have invented a lever. Using a tree trunk that rolls to help us move something heavy, rather than dragging it across a flat surface, we have the beginnings of a wheel. As these very simple solutions to very simple problems became refined, they became inventions.

The nature of man is solving problems, and the solutions to those problems are inventions. Any successful inventor will tell you it is more than just having an idea; it is turning that idea into something people can use.

Inventor or innovator?

Often there is a bit of a smug attitude that favors giving someone credit for an invention over being a mere innovator. A good example is a remark I've seen regarding Henry Ford: "he didn't invent anything."

Even if Henry Ford invented nothing, he changed everything. Ford did not invent the automobile, and Ford did not invent the assembly line. What Ford did was improve upon the assembly line with a passion that drove down the price of an automobile significantly. He turned the automobile from a rich man's toy into something the average American could afford. Ford improved upon the design of the automobile and the assembly line and revolutionized an industry.

The concept of the automobile, and specifically the electric automobile, is an idea that has been around for more than 100 years. Henry Ford thought about the electric automobile, as did other inventors, over a hundred years ago. But what is one of the hottest topics in modern technology? The electric car. In recent years there has been a fascination with the work of Tesla Motors, and recently Faraday Future made news by showing a new electric automobile prototype.

Isn't technology an ongoing evolution of ideas and innovations? Do you see the work of modern electric car companies like Tesla Motors and Faraday Future as inventing new things or combining existing things? The more important question I would ask, is why does that distinction even matter?

In search of the glorified eureka moment

There are special individuals who have those eureka moments, where one idea changes everything. There are visionaries who have an idea and see what is possible before the technology exists to make it real. There are inventors who take visions and make them real. There are innovators who take a good invention and make it great. And there are industrialists who take an invention and develop it into an industry.

Study people to learn from their success, and their failures. Try to understand when a burning desire can turn into a dangerous obsession.

Question everything. Find something that really interests you, and learn everything you can about the topic. How does it work? How could it be made better?

Geeks introduce us to brave new worlds, with visions of the future. Geeks pick up where others left off, to turn a vision into a reality.


Wondering about the dark web and the forbidden fruit of the internet

Guru 42 Blog -

The phrase forbidden fruit typically refers to engaging in an act of pleasure that is considered illegal or immoral. That fits the mold of many questions I am often asked, such as what illegal or immoral websites you can find on the mysterious and mythical part of the internet known as the dark web. The mysterious dark web, sometimes called the dark net, is the fuel for spy movies. It helped to create WikiLeaks, run by the super spy Julian Assange, and it allows cyber snitches like Edward Snowden to share secret information. People are anxious to know how to find what is hiding beneath the surface in the dark web.

According to remarks made by Roger Dingledine at a recent Philly tech conference, the overall perception of the dark web is more mythical than factual. Roger Dingledine is an MIT-trained American computer scientist known for having co-founded the Tor Project, the network popularly called "the dark web." Dingledine spoke at Philly Tech Week 2017, putting some of the myths and legends of "the dark web" into perspective.

The worldwide network known as “the dark web” uses specially configured servers designed to work with custom configured web browsers with the purpose of hiding your identity. You will see the term Tor servers and web browsers to describe this private network. Tor originally stood for "The Onion Router."  The Tor Project, Inc is a Massachusetts-based research-education nonprofit organization founded by computer scientists Roger Dingledine, Nick Mathewson and five others. The Tor Project is primarily responsible for maintaining software for the Tor anonymity network.

If you are looking for all that forbidden fruit hiding beneath the surface: according to Dingledine, no more than one to three percent of the Tor network's traffic comes from "hidden services" or "onion services", services that use the public internet but require special software to access. Dingledine claimed that the dark web, as popularly imagined, basically does not exist. He added that it is nonsense to say there are "99 other internets" users can't access.

One popular way to describe the deep web and dark net is with a graphic of an iceberg. Dingledine advised his audience not to pay attention when someone uses the iceberg metaphor, and criticized the news providers who use it to describe the darknet and the deep web. According to Dingledine, just about any use of the phrase "dark web" is really just a marketing ploy by cybersecurity firms and other opportunists. So the forbidden fruit you were hoping to find really is just a myth after all.

Learn more:

People are fascinated by what you can find on the dark web, but have no idea what it all means. Learn more from Guru42 in this article, where I go over the basic definitions with links to learn more: Buzzwords from the world wide web to deep web and dark net

Referencing Roger Dingledine at Philly Tech Week 2017 here are some links about that event:

Stop Paying Attention When Someone Uses The Iceberg Metaphor For The Dark Web

Stop talking about the dark web: Tor Project cofounder Roger Dingledine







What you need to know before buying a computer

Guru 42 Blog -

At last the secret of what you need to know before buying a computer is revealed: there is no one-size-fits-all answer. But you don't need to be a world class geek to learn computer buzzwords and understand some basic concepts before you shop for your next computer.

I usually try to stay out of the Apple versus Microsoft debates. But since I am updating some content on desktop operating systems, I thought I would use this blog post to address the often asked question of "what computer should I buy" and add this perspective. I will also introduce a few new articles that answer some frequently asked questions relevant to someone shopping for a computer.

Recently on an online forum the question of "what computer should I buy" was asked, based on the idea that a MacBook Pro is inherently the best laptop out there. The person asking was looking for reasons to buy a MacBook Pro, but gave no clues about how they were going to use it. That is a very important factor in answering the question! I never answer "what computer should I buy" for friends and family until I ask several questions.

I laughed as I read one of the answers, which stated, "If all you are going to do is web surfing, social media, and email you don't need a MacBook Pro." Yeah, that's right. There are Chromebooks as well as cheap Windows notebooks that could do that for a lot less money!

My best advice to anyone looking to buy a computer: think long and hard about how you are going to use it, find other people with the same wants and needs, and ask them what they own and what they like and don't like about it.

I am not a graphics designer or an artist; those are the types of users who are typically the Apple fans. I have been working in enterprise computer networking for more than 20 years, and started working on desktop computers in the 1980s. I look at the computer as a tool, and I look for the best tool for the task at hand. I have no loyalty to any specific brand.

Many answers comparing Microsoft to Apple use some variation of a luxury-car-versus-cheap-import comparison, implying that if you could afford the expensive luxury car but chose otherwise, you must be a fool. So let me run with that analogy.

Take a step back and look at the history of Apple versus Microsoft. In the 1990s, when Windows 95 dominated the desktop, Microsoft was the Ford F-150 pickup truck. Not many people would describe the Ford F-150 as a sexy luxury vehicle, but many would describe it as the workhorse vehicle that gets the job done. There's a good case to be made that the folks marketing to pickup truck buyers have a different plan than those looking to sell the sexy luxury vehicle.

A computer is a tool I use for work, as well as recreation. I work in a business world that is Microsoft based. We are required to purchase a specific brand of Windows based computers; not my favorite brand, but that's my environment. My problems are not so much with Windows as with the vendors that support our users, who create applications that run on old Microsoft operating systems. I have to deal with home cooked applications that were designed for last generation Windows computers. That's my world.

I have had iPads and various other Apple products in my home, and they never got used. Even if the interface is slightly different, I don't have time to deal with it. I have had access to Kindles and Nooks, and they never got used. I can put an application on my Windows notebook that reads the books, so why do I need to learn a new interface? It's called being lazy, I know it is, but I have no personal reason to care about Apple products. It's nothing personal.

If one of my family members wants to buy a luxury car, I will be happy to ride in it. If money were no object, tomorrow I would go out and buy a new Ford F-150 pick up truck that best suited my needs.

I don't get emotionally attached to my computers or automobiles. They are tools. Nothing more.

You too can understand computer buzzwords

Since 1998, Computerguru has attempted to provide self help and tutorials for learning basic computer and networking technology concepts, maintaining the theme "Geek Speak Made Simple." Recently I updated the Drupal content management software for Computerguru and updated a few pages.

Based on commonly asked questions, I have added several new pages to the section Common technology questions and basic computer concepts. On computer operating systems we have added an article that explains the major differences between desktop computer operating systems and one on installing Linux and understanding all the different Linux distributions.

I get a lot of questions on computer cables, and finally finished up this article, Ethernet computer network cable frequently asked questions answered, as well as an article explaining computer network modular connectors and telephone registered jacks.

And based on many questions on printers, we had some fun coming up with this article, the ugly truth about computer printers.

Yes, I know that sounds like a lot of geek speak, but we do our best to break it all down into small bite sized chunks, so it is easy to digest.  Please take a few minutes to check out the new content, and please share it with your geek friends on social media.

Any topics that need covered? Any questions we missed?

Are there any buzzwords bothering you?  Something else you would like us to cover here at the Guru 42 Universe?  Let us know: Guru 42 on Twitter -|- Guru 42 on Facebook -|- Guru 42 on Google+ -|- Tom Peracchio on Google  



Wireless Networks in Simple Terms WLAN and Wi-Fi defined

ComputerGuru -

The term Wi-Fi is often used as a synonym for wireless local area network (WLAN). Specifically, the term "Wi-Fi" is a trademark of a trade association known as the Wi-Fi Alliance. From a technical perspective, WLAN technology is defined by the Institute of Electrical and Electronics Engineers (IEEE).

In computer networking everything starts with the physical layer, which for many years was a copper wire. The physical layer was expanded to include anything that represents the wire, such as fiber optic cable, infrared, or radio spectrum technology.

Wireless network refers to any type of computer network that is not connected by cables of any kind. While cell phone technology is often discussed as a form of wireless networking, it is not the same as the wireless local area network (WLAN) technology discussed here.

What is Wi-Fi?

The term Wi-Fi has often been used as a technical term to describe wireless networking. Wi-Fi is actually a trademark of the Wi-Fi Alliance, a global non-profit trade association formed in 1999 to promote WLAN technology. Manufacturers may use the Wi-Fi trademark to brand products if they are certified by the Wi-Fi Alliance to conform to certain standards.

A common misconception is that Wi-Fi is an acronym for "wireless fidelity"; it is not. The Wireless Ethernet Compatibility Alliance wanted a cooler name for the new technology, as "IEEE 802.11b" was not all that catchy. The marketing company Interbrand, known for creating brand names, was hired to create a brand name to market the new technology, and the name Wi-Fi was chosen. The term "Wi-Fi", with the dash, is a trademark of the Wi-Fi Alliance.

IEEE 802.11 defines WLAN technology

The actual technical standards for wireless local area network (WLAN) computer communication are known as IEEE 802.11. IEEE refers to the Institute of Electrical and Electronics Engineers, a non-profit professional association formed in 1963 by the merger of the Institute of Radio Engineers and the American Institute of Electrical Engineers.

IEEE 802 refers to a family of IEEE standards dealing with networks carrying variable-size packets, which makes it different from cell phone based networks; 802.11 is the subset of the family specific to WLAN technology. Victor "Vic" Hayes was the first chair of the IEEE 802.11 group, which finalized the wireless standard in 1997.

This link takes you to the 802.11 specification that contains all the geek speak on how it works: IEEE-SA - IEEE Get 802 Program

How fast is Wi-Fi?

Wi-Fi speed is rated according to maximum theoretical network bandwidth defined in the IEEE 802.11 standards.

For example:

IEEE 802.11b - up to 11 Mbps

IEEE 802.11a - up to 54 Mbps

IEEE 802.11n - up to 300 Mbps

IEEE 802.11ac - up to 1 Gbps

IEEE 802.11ad - up to 7 Gbps

If you look at the IEEE 802.11 Wireless LANs standards you will see the ongoing evolution with several standards under development at this time to increase speeds even more.

Keep in mind that Wi-Fi speed is how fast your internal network is, as in your wireless LAN (Local Area Network).

Fast Wi-Fi does not mean a fast internet connection; it has nothing to do with the speed or bandwidth of your internet access.
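To see why the slower of the two links always wins, here is a rough Python sketch. The rates are the theoretical maximums from the list above; real-world throughput is lower, and the function name and example numbers are illustrative assumptions, not measurements.

```python
# Theoretical maximum link rates from the IEEE 802.11 standards above, in Mbps.
WIFI_MAX_MBPS = {
    "802.11b": 11,
    "802.11a": 54,
    "802.11n": 300,
    "802.11ac": 1000,
    "802.11ad": 7000,
}

def transfer_seconds(file_mb: float, link_mbps: float, internet_mbps: float) -> float:
    """Time to fetch a file from the internet: the slower hop is the bottleneck."""
    bottleneck = min(link_mbps, internet_mbps)  # fast Wi-Fi can't speed up slow internet
    return (file_mb * 8) / bottleneck  # megabytes -> megabits, then divide by the rate

# A 100 MB download over 802.11ac Wi-Fi, but with a 50 Mbps internet plan:
print(transfer_seconds(100, WIFI_MAX_MBPS["802.11ac"], 50))  # 16.0 seconds
```

Swapping in 802.11ad changes nothing here: the 50 Mbps internet plan, not the wireless LAN, sets the download time.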

How does Wi-Fi work?

A Wi-Fi enabled device such as a personal computer or video game console can connect to the Internet when within range of a device such as a wireless router connected to the Internet. Wireless local area network (WLAN) technology allows your device to connect to the router, which in turn connects you to the internet. In order to connect to the internet, you need a unique IP (internet protocol) address. On your home network, when your router is connected to the internet, it has a public address; that is the one that faces the internet, and it is unique in relationship to the other routers on the internet.

Your router also has a local IP address, something like 192.168.1.1, in a private IP address space. Addresses beginning with 192.168 cannot be transmitted onto the public Internet and are typically used for home local area networks (LANs). If you have four home computers, your router creates a home network, and the four home computers each have a unique number in relationship to each other. Your local computers connect to the router either by a wire plugged into the router or through a wireless signal.

Routers are used to create logical borders between networks, and in this way allow a gateway, such as an access point to the internet, to be shared. In geek speak terms subnetting can be very complex, but what is happening here is the process known as subnetting.
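As a small illustration of the private-address idea, Python's standard ipaddress module can test whether an address falls inside a private range. The subnet and hosts below are examples for the demo, not any particular router's configuration.

```python
import ipaddress

# 192.168.0.0/16 is one of the reserved private ranges; addresses in it
# are never routed on the public internet.
home_lan = ipaddress.ip_network("192.168.1.0/24")  # a typical home subnet

for host in ["192.168.1.10", "192.168.1.42", "8.8.8.8"]:
    addr = ipaddress.ip_address(host)
    print(host, "private:", addr.is_private, "on home LAN:", addr in home_lan)
```

The first two addresses are private and belong to the example subnet; the last is a public internet address and is not.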


The ugly truth about computer printers

ComputerGuru -

The printer is the source of pain and problems for every computer user.  The ugly truth about computer printers is that everyone has one and they all stink.

A printer is very mechanical; there are a lot of moving parts. Every printer, from the very simplest to the most complex, has numerous gears, springs, and rollers that all need to move in perfect harmony in order for your printer to work.

To understand why computer printers are a source of frustration, let me explain some of the other components of a typical computer system. On your home desktop computer you have a large box that everything plugs into. I hear people call this box a CPU; some call it a hard drive. Technically the CPU is one small part on the main circuit board that sits inside that box. The main circuit board, as well as the CPU and memory modules that plug into it, are solid state, meaning they are all electronic. Unless you get hit with a power surge or some other external electrical issue, it is rare that the electronics of a computer wear out over time. Even hard drives, which once were very mechanical, are now becoming solid state, which means no moving parts and much more reliability.

The same goes for your display, what we used to call a monitor. Back in the days of CRT monitors, the CRT (Cathode Ray Tube) wore out over time; it degraded because it heated up. In my experience over the years I've seen some monitor failures. Not so much with modern displays; like the computer itself, they are now all electronic and less likely to degrade over time.

Things like keyboards and mice still have a few mechanical parts to them, but they don't wear out often. When they do wear out, they are simple to replace, and people don't get too excited when they need replaced.

But alas, the printer, the pain of every computer user.  You just typed that report and you need it now.  You are leaving for the movies and you want to print the tickets, and the printer won't work.  There is never a convenient time for the printer to break.  

Even the simplest of printers has a handful of gears, springs, and rollers that wear out over time. The paper tray gets banged around every time you fill it up. Every time someone takes out a paper tray, they bend something, they twist something, a part gets knocked off. With the need to lower the cost of printers, many of these mechanical parts are made from very low quality metal and plastic.

And here is one element of printers that many people overlook: the paper. When the air gets dry, when the heat is on in the winter, the paper gets full of static electricity, so it jams more often. Instead of taking the paper out of the tray, fanning it a bit, and flipping it over, you bang the paper tray a few times. Maybe you yank the paper out when it jams, bending and stretching the metal arms and guides on the paper tray.

When the weather is damp and humid, that will also cause the paper to jam. Do you close the wrapper on your paper when it is just laying around?  Or is it just thrown on a shelf outside the wrapper?  I have seen many print quality issues caused by paper. Having spent a long career in office automation and computer networking I could write a book on the subject of printer problems because of paper.  The hardest part in answering this was keeping it brief.

Types of printing technology

Another issue you have with printers is consumable supplies like ink and toner. Every freaking printer model has its own unique ink or toner cartridge. When you try to save money by refilling cartridges, it is a crap shoot. More often than not, I have seen refilled cartridges cause problems.

In the early days of desktop computers the dot matrix printer was the standard. They could be pretty noisy as the small needles in the print head fired through the ribbon, creating dots of information on your paper. Ribbons faded over time, and copy quality was not great, but printer ribbons were fairly inexpensive compared to modern ink cartridges. The boxes of paper with the tractor feed holes seem a little primitive compared to the plain paper printers of today, but in many ways tractor feed paper was a more problem free solution than many of the modern printers with paper trays.

Inkjet printers began replacing dot matrix printers, offering higher quality. A less noisy printer with higher quality could have been a blessing; instead the inkjet technology was more of a curse. The color inkjet printer uses multiple color ink cartridges, and building the print head into the replaceable cartridge adds to the expense of each one. The cartridges themselves have very narrow inkjet nozzles that are prone to clogging, and they dry out over time. Newer intelligent ink cartridges that communicate with the printer add another level of complexity, and another potential point of failure.

Laser printers have been around since the very early days of desktop computers. They are high quality printers, but were for many years very high cost. In the early days it was rare to have a laser printer on your home computer, but over the years the quality has increased and the price has dropped dramatically. You can get a low cost black-and-white laser printer for less than a hundred dollars. That is what I have in my home office; I have given up on low cost ink jet printers. Most of the time I use my home office laser printer to print a document such as a receipt, or maybe my tickets for a movie or sporting event, and I don't need color for that.

The price of a laser printer toner cartridge sounds expensive (the last one I replaced was over $50), but they last ten times longer than an ink jet cartridge. If you look at it on a cost per copy basis, a laser printer is significantly cheaper to own than an ink jet. If I really need a high quality color copy, I can take a document on a USB drive to a local shop and get one there.

Prices have been dropping in recent years, and color laser printers cost a fraction of what they once cost.  If you need a color printer and print more than a few copies a month, do some calculations on the cost per copy of a color laser printer.  You might be surprised to see that over the long haul a color laser printer is not as expensive to own as an ink jet.
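The cost-per-copy comparison above is simple division. Here is a minimal Python sketch; the prices and page yields are illustrative assumptions in the spirit of the numbers mentioned above, not quotes for any real printer model.

```python
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    """Consumable cost per printed page, ignoring the printer's purchase price."""
    return cartridge_price / page_yield

# Illustrative numbers: a $25 ink cartridge rated around 200 pages
# versus a $50 toner cartridge rated around 2,000 pages.
ink = cost_per_page(25.00, 200)      # $0.125 per page
toner = cost_per_page(50.00, 2000)   # $0.025 per page
print(f"ink ${ink:.3f}/page, toner ${toner:.3f}/page")
```

Even though the toner cartridge costs twice as much up front, under these assumptions it is five times cheaper per page, which is the whole argument for looking at cost per copy rather than cartridge price.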

It's not your fault for buying a crappy printer

Between having a home computer system and working in the field of office automation and business machines since the early 1980s, I have worked with numerous brands of printers and printing equipment. It is hard to recommend a specific brand or specific model of printer at any given time because they are constantly changing. In a marketplace that is always shopping for low cost, a manufacturer will often cut corners to lower costs, and a usually reliable brand will have some really horrible models.

We are discussing the computer printer here as a hardware device, but software issues, such as finding the proper drivers for your current computer operating system and getting Wi-Fi printing to work on your network, can also create problems. Shop wisely, and read over consumer reviews of the currently popular printers to see the potential problems for a model you are considering buying.

The primary reason a printer is the most likely part of your computer system to cause you pain comes down to the printer having the most moving parts, but there are also many other issues dealing with supplies such as paper, ink, and toner. Maybe you won't feel any better about all the printing problems you are having after reading this article, but at least you will know it's not your fault for buying a crappy printer; they all stink.


Buzzwords from the world wide web to deep web and dark net

Guru 42 Universe -

There are a lot of definitions that get thrown around about “the deep web” and “the dark web.” It is frustrating how people use the terms without a clue as to what they mean. The deep web and dark web are NOT synonyms!

Starting with defining "The Internet," think of all the wires and connections as a highway system. When I talk about the general term of the internet, I am speaking about the technologies that move packets of information along wires from one destination to another, specifically the family of protocols known as TCP/IP (transmission control protocol - internet protocol).

The "World Wide Web" represents the many destinations that are connected together using the public highway system of the internet. When I talk about the general term of the World Wide Web, I am speaking about the technologies that create websites and web servers, such as HTTP (hypertext transfer protocol) and HTML (hypertext markup language).
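To make the highway-versus-destination split concrete, here is a self-contained Python sketch that runs a tiny web server on your own machine and fetches a page from it: TCP/IP moves the bytes, HTTP structures the request and response, and HTML is the page content. The handler class and the page it serves are made up for this demo.

```python
import http.client
import http.server
import threading

# HTTP (the "destination language") running on top of TCP/IP (the "highway").
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        content = b"<html><body>hello</body></html>"  # HTML: the page markup
        self.send_response(200)
        self.send_header("Content-Length", str(len(content)))
        self.end_headers()
        self.wfile.write(content)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: TCP/IP opens the connection, HTTP asks for a resource.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
page = resp.read()
print(resp.status, page)  # 200 b'<html><body>hello</body></html>'
conn.close()
server.shutdown()
```

The same protocols that fetch this local page are the ones that carry you to every public website, which is why "the internet" and "the web" name different layers of the same trip.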

Where it gets confusing is how you apply the usage of the terms. Sometimes when people say "the internet" they are not describing just the highway system, but they are using the term to represent all the websites in existence. Likewise, often when people say “The World Wide Web” they use it to mean all the websites in existence.

The technology that the internet uses on the public highway, things like the TCP/IP internet protocols and the World Wide Web components HTTP and HTML, can also be used to take us to private destinations as well. This collection of private destinations is known as the "Deep Web." Computer scientist Michael Bergman, founder of the search indexing specialist company BrightPlanet, is credited with coining the term deep web in 2001 as part of a research study.

In 2014, a Forbes article, "Insider Trading On The Dark Web" (1), completely confused the terms, misquoting the definitions of BrightPlanet CEO Michael Bergman and incorrectly describing BrightPlanet as "a firm that harvests data from the Dark Web." In response to the confusion about the terms Deep Web versus Dark Web, BrightPlanet published the article "Clearing Up Confusion – Deep Web vs. Dark Web." (2)

The link to the BrightPlanet article is listed at the end of this article, but here are a few points from that article which define the main points.

- "The Surface Web is anything that can be indexed by a typical search engine like Google, Bing or Yahoo."
- "...the Deep Web is anything that a search engine can’t find."
- "The Dark Web then is classified as a small portion of the Deep Web that has been intentionally hidden and is inaccessible through standard web browsers."
- "The key thing to keep in mind is the Dark Web is a small portion of the Deep Web."

Why does the "deep web" have much more content than the "regular web" since it's used by far fewer people?

Here's an analogy that might help you understand why there is so much more information "below the surface" on private networks, than above the surface on public networks.

Go to the downtown of an average city, where you can find a variety of commercial office buildings. Some of the buildings have a lobby where you can go inside and walk around. Some buildings might have a common area where the general public can walk around freely and access various bits of information, like the lobby of a bank or insurance company. But on the floors above the lobby are offices that require special privileges to access; you must have a need to get into those rooms.

Likewise, you might have a government building where the first floor might contain a post office or some other public service agency that anyone can access. But the floors above it could contain other types of offices where admission is restricted, or accessed by invitation only.

In your downtown area, how many of the buildings can you walk around freely, and how many have controlled access? Are there buildings that you can not walk around in at all because they are privately owned and don't allow access to the general public?

I could expand the analogy further, but hopefully you start to see that in the "real world" of your downtown area there will be places that are open to the public, and other areas with various degrees of access limitations. Likewise, in the virtual world of the web there will be places that are open to the public, and other areas with various degrees of access limitations.

The deep web does not mean some dark and mysterious place of evil; it is simply a term describing an area of controlled access rather than free and open access.

What is the dark web and how do you access it?

Going back to the analogy that the deep web represents the buildings in your town that don't allow access to the general public, the dark web represents all the back alley doorways that are not clearly marked and are accessed by knowing what to say to the doorman to gain access to what is inside.

The worldwide network known as "the dark web" uses specially configured servers designed to work with custom configured web browsers with the purpose of hiding your identity. You will see the terms Tor servers and Tor web browsers used to describe this private network. Tor originally stood for "The Onion Router."

Tor receives funding from the American government but operates as an independent nonprofit organization. The dark web is an interesting place as described in a Washington Post article that explains how the NSA is working around the clock to undermine Tor's anonymity while other branches of the federal government are helping fund it.(3)

A Wired article explains how WikiLeaks was launched with documents intercepted from Tor.(4) You can follow this link to an interview with former government contractor Edward Snowden (5) explaining how Tor is used to create a private communications channel.

What can you find on the dark net?

The mysterious dark web, sometimes called the dark net, is the fuel for spy movies. It helped to create WikiLeaks, run by the super spy Julian Assange, and it allows cyber snitches like Edward Snowden to share secret information.

Because the dark net is hidden, and the people hiding there are doing their best not to be found, knowing what goes on in the dark can be as mysterious as the name implies. For example, one study claims that nearly half of the sites on the dark net are not doing anything illegal.(6) But a different study claims that 80% of dark net traffic is related to child abuse and porn sites.(7)

Various names have been used to describe the dark net, such as the black internet, to suggest it is the home of online black markets. And the claims of the black internet are supported when a well-known online drug black market gets busted. (8)

But does anyone really know what we could find on the dark net? What could you find in your city if you started knocking on doors in dark alleys? Would you want to guess?

Learn more:

Internet and World Wide Web visionaries ponder surviving world war

Who invented the world wide web?


(1) Insider Trading On The Dark Web

(2) Clearing Up Confusion – Deep Web vs. Dark Web.

(3) The NSA is trying to crack Tor. The State Department is helping pay for it.

(4) WikiLeaks Was Launched With Documents Intercepted From Tor

(5) This is What a Tor Supporter Looks Like: Edward Snowden

(6) Research suggests the dark web is not as dark as we think

(7) Study claims more than 80% of 'dark net' traffic is to child abuse sites

(8) "End Of The Silk Road: FBI Says It's Busted The Web's Biggest Anonymous Drug Black Market"


Everything you need to know about Ethernet and computer cabling

Guru 42 Universe -

The concepts of Ethernet and computer network cabling are full of buzzwords and geek speak. We wanted to break the jargon down into bite-sized chunks to help you understand the concepts.

Everything in computer networking starts at the physical layer, that's where the wires plug into the boxes with blinking lights. Because Ethernet deals with wires at the physical layer, at times Ethernet becomes a generic word for any type of wire associated with a computer network.

We created this section of the Guru 42 Universe on business success beyond the technology buzzwords based on conversations we had with business professionals as well as technology professionals. In discussing technology from the perspective of a business owner or business manager, we realize you don't have time to become a network engineer, but we also understand your frustration with all the buzzwords. With those thoughts in mind we created this introductory page defining the term Ethernet and explaining computer network cabling.

In designing ComputerGuru we break down the topics from the perspective of the person asking the questions. At our ComputerGuru site we have the section Common technology questions and basic computer concepts, which is aimed at the typical home computer user.

Even a non-technical casual user of a personal computer has probably heard the term Ethernet from time to time. Likewise, the typical computer user has probably misplaced the piece of wire used to connect their computer and gone off in search of a network cable. As an introduction to Ethernet and computer network cabling we have created the following pages: Ethernet computer network cable frequently asked questions answered and Computer network modular connectors and telephone registered jacks.

The strict technical definition of Ethernet is a physical and data link layer technology for local area networks (LANs). If you want to dig deeper into the technology, in our section targeted to learning computer networking technology we have the section, Basic network concepts and the OSI model explained in simple terms.  In that section The Physical Layer of the OSI model discusses the more technical terms of data communications.  The concept of Ethernet is more than just defining wires and connections, and that is discussed as part of the The Data Link Layer of the OSI model.

Any topics that need covering? Any questions missing?

Are there any buzzwords bothering you?  Something else you would like us to cover here at the Guru 42 Universe?  Let us know: Guru 42 on Twitter -|- Guru 42 on Facebook -|- Guru 42 on Google+ -|- Tom Peracchio on Google  


Computer network modular connectors and telephone registered jacks

ComputerGuru -

The plastic plugs on the ends of telephone wiring and computer cables are defined by various technical standards. Because these standards are full of technical definitions and acronyms, it is easy to see how street slang becomes the accepted definition for many of the plastic plugs.

It is important to understand that connecting devices together is more than just matching up connector ends on a piece of wire. Just because you can find an adapter to make your cable fit into a connection is no guarantee that the device will communicate on your network. Some connectors that look exactly alike can have different wiring configurations.

In the world of technology, street slang, or common buzzwords, often become the accepted description of something rather than the specific technology standard. For example, describing Ethernet patch cables as using RJ45 connectors illustrates one of the most misused terms in the world of technology.

We will do our best to break down some of the buzzwords and jargon to help you understand the differences in the terms.

Modular connectors

A modular connector is an electrical connector that was originally designed for use in telephone wiring, but has since been used for many other purposes. Many applications that originally used a bulkier, more expensive connector have converted to modular connectors. Probably the most well known applications of modular connectors are for telephone jacks and for Ethernet jacks, both of which are nearly always modular connectors.

Modular connectors are designated with two numbers that represent the quantity of positions and contacts. For example, the 8P8C modular plug is a plug having eight positions and eight contacts.
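The designation is regular enough to take apart mechanically. As a quick sketch (the helper function name here is ours, not part of any standard):

```python
import re

def parse_designation(name: str) -> tuple[int, int]:
    """Split a modular-connector designation like '8P8C'
    into (positions, contacts)."""
    m = re.fullmatch(r"(\d+)P(\d+)C", name.upper())
    if not m:
        raise ValueError(f"not a modular-connector designation: {name!r}")
    return int(m.group(1)), int(m.group(2))

print(parse_designation("8P8C"))  # (8, 8) - the Ethernet plug
print(parse_designation("6P2C"))  # (6, 2) - the plug of an RJ11 jack
```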

Do not assume that connectors that look the same are wired the same. Contact assignments, or pin outs, vary by application. Telephone network connections are standardized by registered jack numbers, and Ethernet over twisted pair is specified by the TIA/EIA-568 standard.

Telephone industry Registered Jack

A Registered Jack (RJ) is a wiring standard for connecting voice and data equipment to a service provided by a telephone company. In some wiring definitions you will see references to the Local Exchange Carrier (LEC), which is a regulatory term in telecommunications for the local telephone company.

Registration interfaces were created by the Bell System under a 1976 Federal Communications Commission (FCC) order for the standard interconnection between telephone company equipment and customer premises equipment. They were defined in Part 68 of the FCC rules (47 C.F.R. Part 68) governing the direct connection of Terminal Equipment (TE) to the Public Switched Telephone Network (PSTN).

Connectors using the designation Registered Jack (RJ) describe a standardized telecommunication network interface. The RJ designations only pertain to the wiring of the jack; it is common, but not strictly correct, to refer to an unwired plug by any of these names.

For example, RJ11 is a standardized jack using a 6P2C (6 position 2 contact) modular connector, commonly used for single-line telephone systems. You will often see telephone cables with four wires used for a common analog telephone referred to as RJ11 cables. Technically speaking, RJ14 is the configuration for two lines using a six-position four-conductor (6P4C) modular jack.

RJ45 is a standard jack once specified for modem or data interfaces using a mechanically-keyed variation of the 8P8C (8 position 8 contact) body. Although commonly referred to as an RJ45 in the context of Ethernet and category 5 cables, it is incorrect to refer to a generic 8P8C connector as an RJ45.

Why is an Ethernet eight-pin modular connector (8P8C) not an RJ45?

Both the twisted pair cabling used for Ethernet and the telecommunications RJ45 standard use the 8P8C (eight position, eight contact) connector, and therein lies the confusion and the misuse of the terms. Although commonly referred to as an RJ45 in the context of Ethernet and Category 5 cables, it is incorrect to refer to a generic 8P8C connector as an RJ45.

The 8P8C modular connector is often called RJ45 after a telephone industry standard defined in FCC Part 68. The Ethernet standard is different from the telephone standard: TIA-568 is a set of telecommunications standards from the Telecommunications Industry Association (TIA). Standards T568A and T568B are the pin and pair assignments for eight-conductor 100-ohm balanced twisted pair cabling to 8P8C (8 position 8 contact) modular connectors.
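To make the T568A/T568B distinction concrete, here is a sketch of the two pin assignments as lookup tables (pin number to wire color, per the TIA-568 termination schemes):

```python
# TIA-568 pin-to-wire-color assignments for 8P8C connectors.
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

# The two terminations differ only in swapping the green and orange pairs:
print([pin for pin in T568A if T568A[pin] != T568B[pin]])  # [1, 2, 3, 6]
```

A cable wired to the same scheme on both ends is a straight-through cable; wiring T568A on one end and T568B on the other swaps the two 10/100 signal pairs.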

How does a RJ45 to RJ11 converter work?

There is no such thing as an RJ45 to RJ11 converter. They are two different types of connectors for two totally different standards of communication. Cables with various pin configurations and wire pairs are created for specific purposes. Be careful when looking to "convert" one type of wire into another. An adapter that allows you to connect an RJ11 plug into an RJ45 jack is not converting anything.

Technically speaking, neither RJ11 nor RJ45 is a computer networking standard. Many times when people are looking to convert between RJ11 and RJ45 they are dealing with a device made for a two-wire phone line and trying to connect it to an Ethernet eight-pin (8P8C) unshielded twisted-pair (UTP) modular connector.

I see many questions on internet forums asking about various adapters and converters. Just because you can convert a plug from one type to another does not mean that the signal traveling along the wire will work as you expect. I cannot stress enough the importance of not using any type of adapter or converter without knowing the exact wiring configuration of the devices you are trying to connect.


Ethernet computer network cable frequently asked questions answered

ComputerGuru -

You will often hear a common computer network patch cable called an "Ethernet cable." While most modern local area networks (LANs) use the same type of cable, the term Ethernet refers to a family of computer networking technologies that defines how information flows through the wire, but does not define the physical network cable.

The standards defining the physical layer of wired Ethernet are known as IEEE 802.3, which is part of a larger set of standards by the Institute of Electrical and Electronics Engineers Standards Association.

Cable types, connector types and cabling topologies are defined by TIA/EIA-568, a set of telecommunications standards from the Telecommunications Industry Association (TIA). The standards address commercial building cabling for telecommunications products and services.

Computer network cabling

Twisted Pair Cabling is a common form of wiring in which two conductors are wound around each other for the purposes of canceling out electromagnetic interference, which can cause crosstalk. The number of twists per meter makes up part of the specification for a given type of cable.

The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded twisted-pair (STP). In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire mesh. Unshielded twisted pair (UTP) cable is the most common cable used in modern computer networking.

What does Cat5 Cable mean?

A Category 5 cable (Cat5 cable) is made up of four twisted-pair wires, certified to transmit data up to 100 Mbps. Category 5 cable is used extensively in Ethernet connections in local networks, as well as telephony and other data transmissions.

Cat5 Cable has been the standard for homes and small offices for many years. As technology for twisted pair copper cabling has progressed, successive categories have given buyers more choices. Category 5e and Category 6 cable offer more potential for bandwidth and better potential handling of signal noise or loss. Newer cable types also help to deal with the issue of cross talk or signal bleeding, which can be problems with unshielded twisted pair cabling.

The category 5e specification improves upon the category 5 specification by revising and introducing new specifications to further mitigate the amount of crosstalk. The bandwidth (100 MHz) and physical construction are the same between the two.

The category 6 specification improves upon the category 5e specification by improving frequency response and further reducing crosstalk. The improved performance of Cat 6 provides 250 MHz bandwidth and supports 10GBASE-T (10-Gigabit Ethernet) over shorter runs. The Cat 6 cable is fully backward compatible with previous versions, such as Category 5/5e.
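The category progression can be summarized in a small lookup table. The figures below restate the text above (the gigabit rating for Cat5e is common usage; treat the exact values as nominal):

```python
# Nominal bandwidth and commonly cited maximum data rate per category.
CATEGORIES = {
    "Cat5":  {"bandwidth_mhz": 100, "max_rate": "100 Mbit/s"},
    "Cat5e": {"bandwidth_mhz": 100, "max_rate": "1 Gbit/s"},
    "Cat6":  {"bandwidth_mhz": 250, "max_rate": "10 Gbit/s (short runs)"},
}

def meets_bandwidth(category: str, required_mhz: int) -> bool:
    """Check whether a category's rated bandwidth meets a requirement."""
    return CATEGORIES[category]["bandwidth_mhz"] >= required_mhz

print(meets_bandwidth("Cat5e", 250))  # False
print(meets_bandwidth("Cat6", 250))   # True
```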

Older versions of voice and data cable

Category 1 Traditional UTP telephone cable can transmit voice signals but not data. Most telephone cable installed prior to 1983 is Category 1. Category 2 UTP cable is made up of four twisted-pair wires, certified for transmitting data up to 4 Mbps. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above.

Category 3 was widely used in computer networking in the early 1990s for 10BASE-T. In many common names for Ethernet standards the leading number (10 in 10BASE-T) refers to the transmission speed in Mbit/s. BASE denotes that baseband transmission is used. The T designates twisted pair cable.
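That naming convention is regular enough to take apart programmatically. A minimal sketch (the function name is ours):

```python
def parse_ethernet_name(name: str) -> tuple[int, str, str]:
    """Split a name like '10BASE-T' into (speed in Mbit/s,
    signaling type, medium designator)."""
    speed, _, rest = name.partition("BASE")
    return int(speed), "baseband", rest.lstrip("-")

print(parse_ethernet_name("10BASE-T"))    # (10, 'baseband', 'T')
print(parse_ethernet_name("100BASE-TX"))  # (100, 'baseband', 'TX')
```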

Category 4 cable consists of four unshielded twisted-pair (UTP) copper wires used in telephone networks which can transmit voice and data up to 16 Mbit/s. Category 4 cable is not recognized by the current version of the TIA/EIA-568 data cabling standards.

What does Patch Cable mean?

A patch cord, also called a patch cable, is a length of cable with connectors on each end that is used to connect one electronic device to another. In computer networking what people often call an “Ethernet Cable” is Unshielded Twisted-Pair (UTP) patch cable.

What does Straight-Through Cable mean?

A straight-through cable is a standard patch cable used in local area networks. Straight-through cables have the wired pins on one end match those on the other end. In other words, pin 1 on one end is connected to pin 1 on the other end, and the order follows the straight-through route from pin 1 through pin 8.

What is a Crossover Cable?

A crossover cable is used for the interconnection of two similar devices. It is enabled by reversing the transmission and receiving pins at both ends, so that output from one computer becomes input to the other, and vice versa. The reversing or swapping of cables varies, depending on the different network environments and devices in use.
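The two wiring schemes can be sketched as simple pin mappings. Shown here is the 10/100 (two-pair) crossover, which swaps the transmit pair (pins 1 and 2) with the receive pair (pins 3 and 6); a gigabit crossover also swaps the remaining two pairs:

```python
# Straight-through: each pin connects to the same pin on the far end.
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}

# 10/100 crossover: transmit pair (1, 2) swapped with receive pair (3, 6);
# the remaining pins pass straight through.
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

# One side's transmit pins land on the other side's receive pins:
print(CROSSOVER[1], CROSSOVER[2])  # 3 6
```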

A crossover cable connecting two computers directly is an alternative to a wireless connection where one or more computers access a router through a wireless signal. Use a straight-through cable when connecting a router to a hub, a computer to a switch, or a LAN port to a switch, hub, or computer.

Why do you need a crossover cable?

A traditional port found in a computer NIC (network interface card) is called a media-dependent interface (MDI). A traditional port found on an Ethernet switch is called a media-dependent interface crossover (MDIX), which reverses the transmit and receive pairs. However, if you want to interconnect two switches, where both switch ports used for the interconnection are MDIX ports, the cable needs to be a crossover cable.

Introduced in 1998, Auto MDI-X made the distinction between uplink and normal ports and manual selector switches on older hubs and switches obsolete. Auto MDI-X automatically detects the required cable connection type and configures the connection appropriately, removing the need for crossover cables.

Gigabit and faster Ethernet links over twisted pair cable use all four cable pairs for simultaneous transmission in both directions. For this reason, there are no dedicated transmit and receive pairs, and consequently, crossover cables are never required.


Installing Linux: defining distros and choosing a version

ComputerGuru -

In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, Finland, started working on some simple ideas for an operating system. Although the desktop computer market exploded throughout the 1990s, the Linux operating system remained pretty much the domain of geeks who like to build their own computers. I really believed that more than 20 years later we would have Linux computers in our homes as common as the Windows or Apple varieties.

The only dent in the domination of Windows and Apple desktop computers in recent years has been the introduction of the Chromebook as a personal computer in 2011. The Chrome operating system is a curious mix of the Linux kernel with the Google Chrome web browser as its user interface.

The Linux operating system has come a long way since the mid 1990s. From painful early experiences with floppy disks and hunting down hardware drivers, my experiences with installing many distributions of Linux in recent years have been pretty painless.

The Linux kernel

Just as I did with answering the question, "what is the best desktop computer operating system," I am going to generalize a bit here so we don't get too deep into the geek speak. Hopefully the tech purists won't beat me up too much for generalizing. Let's begin with quickly going over the basic definitions.

Think of the Linux kernel as an automobile engine and drive train that was designed by a community. Once the engine and drive train have been developed, there are groups that split off and design their own version of an automobile. Each of these automotive design groups has its own community with goals for how they want to use their finished product: some may focus on style and looks, another group may focus on being practical and functional. Once the group has a general purpose in mind, they will form an online community where they can share ideas in creating a finished product.

The Linux Distro

Each customized version of Linux that adds additional modules and applications is supported by an online community offering internet downloads as well as support. You will see the question phrased as which Linux distro should you use. Distro is a shortened version of the term distribution. There are many distros of the Linux family all based on the same Linux kernel, the core of the computer operating system. There are geeks who swear by which is the best Linux distro, but in the end it is a matter of what works best for you.

When it comes to comparing the various distributions, I find "the big three" to be very similar, because in reality they are variations of the same family. As of the time of this update, March 2017, based on various statistics the most popular version of Linux is Mint, with Debian coming in second, followed by Ubuntu. Mint is a fork of Ubuntu, which is itself a fork of Debian, and Mint was forked off Ubuntu with the goal of providing a familiar desktop graphical user interface.

First answer the question, why are you looking at Linux? Do you have an old computer with an outdated operating system that you are looking to upgrade? Or perhaps you just want to see what all the fuss is about with the "free" alternative to Windows or Apple?

If you simply want to play with Linux and see what all the fuss is about, Mint is a very easy place to start. I have installed Mint on a few old computers with no issues. One of the biggest issues I have experienced with many versions of Linux is the lack of drivers for certain pieces of hardware in some laptop models. There are a few old Dell laptops I gave up installing Linux on because finding drivers for the Wi-Fi was not worth the effort.

Here's a look at various distributions of Linux.

In our previous question on "what is the best desktop computer operating system" we addressed the topic of the "free" alternative to Windows or Apple as we explained open source software. Richard Stallman, the father of the free software movement, explains that the freedom in question refers to the preservation of the freedoms to use, study, distribute, and modify that software, not zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the sentence, "free as in free speech, not as in free beer." Even though Linux is open source, there are versions that are commercially distributed and supported.

Fedora - Red Hat

Red Hat Commercial Linux, introduced in 1995, was one of the first commercially supported versions of Linux, and entered into the enterprise network environment because of its support. Red Hat Linux has evolved quite a bit over the years as Red Hat Linux merged with the community based Fedora Project in 2003.

Fedora is now the free, community-supported home version of Red Hat Linux. While Fedora ranks slightly behind the other distros we mention here in popularity, it is often at the top of the list when it comes to integrating new package versions and technologies into the distribution. Many users in the enterprise environment rave about the stability of Fedora.


openSUSE - SUSE

openSUSE claims to be "the makers' choice for sysadmins, developers and desktop users." You may not find a lot of neighborhood geeks telling you to try openSUSE, but it ranks near the top of many charts as far as popularity. SUSE was marketing Linux to the enterprise market in 1992, before Red Hat. Many American geeks are not as familiar with SUSE because it was developed in Germany. I have not had any issues with installing it. You can always download a "live CD," which allows you to run the operating system off of the CD without having to install it.

openSUSE is the open source version. SUSE is often used in commercial environments because professional help is available under a support contract through SUSE Linux. Having worked as a Novell Netware systems administrator, I was involved with SUSE Linux as the Novell Netware network operating system was coming to the end of its life when Novell bought the SUSE brands and trademarks in 2003. When Novell was purchased by The Attachmate Group in 2011, SUSE was spun off as an independent business unit. SUSE is geared for the business environment with SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop. Each focuses on packages that fit its specific purpose.

Debian - Ubuntu - Mint

Ubuntu and Mint are Debian-based: their package manager is APT (the Advanced Package Tool), a free software user interface that works with core libraries to handle the installation and removal of software on Debian-based Linux distributions. Their packages follow the DEB (Debian) package format.

Ubuntu is often used in commercial environments because professional help is available under a support contract through Canonical, the company behind Ubuntu.

Mint is basically the same OS as Debian or Ubuntu with a different default configuration with a lot of pre-installed applications and a nice looking desktop. Mint was forked off from the Ubuntu community with the goal of providing a familiar desktop Operating System.  If you are looking for something to use as a server Debian or Ubuntu may be a better choice.

What about all the rest?

There are more than 200 different versions of Linux. Once you go beyond the versions mentioned here you are getting into support issues. With each of the three families of Linux we mention here, there is a commercially supported version and a community supported version. Keep in mind, even if you are not buying support through one of the commercial versions mentioned here, each of these families has a well-established online community for support of the open source version.

Is it time to switch to Linux?

Back in the late 1990s I was taking a community college course on Novell networking and systems administration using Novell Netware. As part of the curriculum we had to write a term paper on an unrelated technology topic, and I chose Linux on the desktop. I concluded that I was impressed with Linux as an operating system, but that it would not become a mainstream desktop operating system until hardware companies embraced it and sold home computers with Linux installed. Twenty years later, that really has not happened.

You could make the case that the Google Chromebook is a version of Linux installed and configured along with a computer, but the Google Chromebook has not become a mainstream home computer. If all you want to do is surf the net, interact on social media, and read your email, a Google Chromebook works fine. But beyond that there are many issues.

Hardware drivers and website plugins can be a problem when using any version of Linux. Many manufacturers don't develop Linux device drivers for their hardware, so you need to search them out yourself through your Linux community. On many websites that use Digital Rights Management, like Amazon Video, Netflix, or Sling, getting your streaming to work on Linux can be difficult. Some websites don't recognize Linux as an operating system, and automatic installs of plugins fail.

I know I said at the beginning of this discussion that in recent years my experience in installing Linux has been pretty painless, but I have access to name brand hardware on pretty basic computers. The problem with hardware drivers and browser plug-ins keeps improving, but beware, it can be an issue at times. It is still a concern that can turn your Linux experience sour. The biggest problem I have experienced in experimenting with Linux is network card and Wi-Fi drivers in laptop models.

In our last article we discussed why Microsoft Windows is so popular. Whether you love them or hate them, many applications only have a Windows version. There are many websites that offer "open source equivalents" to your favorite applications. Some equivalents work well; others are very buggy. The key to using any open source application is looking at how active the community that supports it is. Be cautious of applications that look cool and work well but are basically created and supported by a single individual. They can often become unsupported as the developer creates an application and moves on without maintaining it over time.

Take Linux for a test drive

Look for a live distribution of Linux that allows you to run a full instance of the operating system from either CD, DVD, or USB, without making changes to your current system. Many install downloads will offer you a live test drive of the distro that does not install anything to your hard drive. If everything works well from a live test drive, you can feel a bit more comfortable about doing the "real" install.


Desktop personal computer system basic parts defined

ComputerGuru -

If you are studying personal computers as the beginning of your career in technology, or perhaps you are just trying to understand how things work on your home computer to better deal with problems and upgrades, you can't get away with not knowing some very basic definitions of the components of a desktop personal computer system.

Computer hardware is the collection of physical elements that make up a computer system such as a hard disk drive (HDD), monitor, mouse, keyboard, CD-ROM drive, network card, system board, power supply, case, and video card.

The main system board is sometimes called the motherboard. It is the central printed circuit board (PCB) in the system and holds many of the crucial components, providing connectors for other peripherals.

The central processing unit (CPU), the brain of a computer system, is the main component on the main system board. The CPU carries out the instructions of computer programs, performing the basic arithmetical, logical, and input/output operations of the system.
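The cycle of fetching and carrying out instructions can be illustrated with a toy loop; the three-instruction machine below is entirely made up for illustration, but the fetch, decode, and execute steps mirror what a real CPU does:

```python
def run(program, x=0):
    """Execute a toy instruction list against a single register x."""
    pc = 0  # program counter: which instruction to fetch next
    while pc < len(program):
        op, arg = program[pc]          # fetch and decode
        if op == "ADD":                # arithmetic operation
            x += arg
        elif op == "SUB":
            x -= arg
        elif op == "JMPZ" and x == 0:  # conditional jump when x is zero
            pc = arg
            continue
        pc += 1                        # advance to the next instruction
    return x

print(run([("ADD", 5), ("SUB", 2)]))  # 3
```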

System boards will have expansion slots, a CPU socket or slot, locations for memory cache and RAM, and a keyboard connector. Other components may also be present. A slot is a narrow notch, groove, or opening. A socket is a hollow piece or part into which something fits. System boards contain both sockets and slots, which are the points at which devices can be plugged in. A CPU slot is long and narrow while a CPU socket is square.

RAM (Random Access Memory), is the computer's primary storage which holds programming code and data that is being processed by the CPU.

A hard disk drive (HDD) is called secondary storage while memory is called primary storage because programs cannot be executed from secondary storage but must first be moved to primary storage. Basically, the CPU cannot "reach" the program still in secondary storage for execution.

ROM is read-only memory. ROM chips, located on circuit boards, are used to hold programming code that is permanently stored on the chip.

Flash ROM can be reprogrammed whereas regular ROM cannot be. In order to change the programming code of regular ROM, the chip must be replaced. Upgrades to Flash ROM can be downloaded from the Internet.

BIOS stands for basic input-output system. It is used to manage the startup of the computer and ongoing input and output operations of basic components, such as a floppy disk or hard drive.

Computer software is a collection of computer programs and related data that provide the instructions for telling a computer what to do.

System software provides the basic functions for computer usage and helps run the computer hardware. An operating system is a type of software that controls a computer's input and output operations, such as saving files and managing memory. Common operating systems are typically Windows based, but personal computers can use an Apple or Linux based operating system as well.

Application software is computer software designed to perform specific tasks. Common applications include word processing such as Writer, a spreadsheet such as Microsoft Excel, and business accounting such as QuickBooks by Intuit.

What is the difference between a PC (personal computer) and a workstation?

In a business environment you may have a computer on your desk that is very similar to the computer you have at home, but there is one major difference, the work computer is managed as part of a LAN (local area network) that contains many other computers. In the next section we define networking terms and go into a bit more detail on the concept of a LAN.

Some definitions will state that a workstation computer is faster and more powerful than a personal computer. Not necessarily. Terms like "faster and more powerful" are pretty ambiguous. The difference is a bit more clear-cut: it is a point of reference in how they are used.

In your home you have a personal computer, it is the center of your personal technology universe. When you open up an application, it is on that computer. When you create a data file, like a Word document, you save it to that computer.

On a workstation at the office, when you open up an application, it may be installed on your local computer, or it may be installed on an application server somewhere on your LAN. When you create a data file on your workstation, like a Word document, you save it to your personal directory on a file server that is on your LAN.

Many years ago when computer systems were expensive, all the work was done on a mainframe, a huge computer surrounded by geeks in a special room. The end users had dumb terminals, meaning there was a keyboard and a monitor at your desk, but the box they attached to on your desk was called a dumb terminal because it did not do any work, it was dumb!

The concept of the workstation is that some of the "work" is done locally at your desktop, but some of the work could also be done on a computer somewhere else, in the case of the LAN, that somewhere else would be a server.


The Data Link Layer of the OSI model

ComputerGuru -

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.  The Data Link layer deals with issues on a single segment of the network.

Layer two of the OSI model is one area where the theoretical OSI reference model differs from the practical implementation of TCP/IP and the competing Department of Defense (DoD) model. As we will discuss, in the implementation of TCP/IP there is a single lower layer, called the network interface layer, that encompasses Ethernet.

The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. Even though we discussed many of these Ethernet terms in discussing the Physical Layer of the OSI model, we also discuss them here in the context of the Data Link Layer.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:
• Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.
• Connection-oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.
• Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.

IEEE 802.3 is an extension of the original Ethernet standard and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address concerns a device's actual physical address, which is usually designated by the hardware manufacturer. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data. The MAC sub-layer communicates directly with the network adapter card.
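As an illustration, a MAC address's structure can be split apart in a few lines of code. `parse_mac` is a hypothetical helper written for this sketch, not part of any networking library; it only shows the manufacturer-prefix/device-suffix split.

```python
def parse_mac(mac: str) -> dict:
    """Split a MAC address into its vendor and device-specific parts.

    The first three bytes are the Organizationally Unique Identifier
    (OUI) assigned to the hardware manufacturer; the last three are
    chosen by the manufacturer to keep each address unique.
    """
    octets = mac.replace("-", ":").lower().split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return {
        "oui": ":".join(octets[:3]),     # manufacturer prefix
        "device": ":".join(octets[3:]),  # device-specific suffix
    }

print(parse_mac("00-1A-2B-3C-4D-5E"))
```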

Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to re-transmit the message. If the device detects a collision again, it waits twice as long before trying to re-transmit. This is known as exponential backoff.
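The doubling behavior can be sketched in a few lines. `backoff_slots` is an illustrative helper, not a real driver routine; the cap of 10 on the exponent follows classic Ethernet's truncated binary exponential backoff.

```python
import random

def backoff_slots(collisions: int) -> int:
    """Pick a random backoff delay, truncated binary exponential style.

    After the n-th consecutive collision a station waits a random number
    of slot times drawn from [0, 2^min(n, 10) - 1]; the range of possible
    waits doubles with each collision, hence "exponential backoff".
    """
    k = min(collisions, 10)            # classic Ethernet caps the exponent at 10
    return random.randrange(2 ** k)    # 0 .. 2^k - 1 slot times

# The range of possible delays doubles after every collision:
for n in range(1, 5):
    print(f"after collision {n}: wait 0..{2 ** min(n, 10) - 1} slots")
```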

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3™ standard for Ethernet for wired LANs. The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of wireless LAN equipment.

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.

A network interface controller (NIC), also known as a network interface card or network adapter, implements communications using a specific physical layer and data link layer standard such as Ethernet. The 1990s Ethernet network interface controller shown in the photo has a BNC connector (left) and an 8P8C connector (right).



Physical Layer Topology in computer networking

ComputerGuru -

A network topology refers to the layout of the transmission medium and devices on a network. As a networking professional for many years, I can honestly say that about the only time network topology has come up is for certification testing. Here are some basic definitions.

Physical Topology:

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid).

Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables that are connected to the backbone. This topology is an old one and essentially has each of the computers on the network daisy-chained to each other. This type of network is usually peer to peer and uses Thinnet (10Base2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. Both ends of the chain must be terminated with a 50-ohm terminator.

Advantages: cheap, simple to set up.
Disadvantages: excess network traffic, a failure may affect many users, problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to other networked workstations. The star is probably the most commonly used topology today. It uses twisted pair cabling such as 10BaseT or 100BaseT and requires that all devices connect to a hub.

Advantages: centralized monitoring, failures do not affect others unless it is the hub, easy to modify.
Disadvantages: if the hub fails, everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion. The ring topology looks the same as the star, except that it uses special hubs and network adapters. The ring topology is used with Token Ring networks (a proprietary IBM system).

Advantages: equal access.
Disadvantages: difficult to troubleshoot, network changes affect many users, a failure affects many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network.

Hybrid: Combinations of the above topologies, common on very large networks. For example, a star bus network has hubs connected in a row (like a bus network) with computers connected to each hub.

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment.

Logical Topology:

Logical topology defines the network path that a signal follows (ring or bus), regardless of its physical design.

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.

LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access collision detect (CSMA/CD) and token passing.

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device.
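A token-passing round can be simulated in a few lines. `token_ring` is a hypothetical toy written for this sketch, not a real protocol implementation; it only shows that the station holding the token alone may transmit, so every station gets equal, collision-free access.

```python
from collections import deque

def token_ring(stations, frames_to_send, rounds):
    """Toy sketch of token passing: only the token holder may transmit.

    `frames_to_send` maps station name -> number of frames it wants to
    send (the dict is consumed as frames go out). The token circulates
    around the ring, and each holder sends at most one frame per
    possession before passing the token on.
    """
    ring = deque(stations)
    sent = []
    for _ in range(rounds * len(stations)):
        holder = ring[0]
        if frames_to_send.get(holder, 0) > 0:
            sent.append(holder)          # transmit one frame while holding the token
            frames_to_send[holder] -= 1
        ring.rotate(-1)                  # pass the token to the next station
    return sent

print(token_ring(["A", "B", "C"], {"A": 2, "C": 1}, rounds=2))
```

Note that station B, with nothing to send, simply passes the token along; no station can monopolize the medium.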

For CSMA/CD networks, switches segment the network into multiple collision domains.


The Internet Family of Protocols: The TCP/IP protocol suite

ComputerGuru -

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. TCP/IP is not a single protocol, but rather an entire family of protocols.

The network concept of protocols establishes a set of rules so that each system can speak the other's language in order to communicate. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.

Transmission Control Protocol (TCP) and Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family. A protocol stack describes a layered set of protocols working together to provide a set of network functions. Each protocol/layer services the layer above by using the layer below.

Internet Protocol (IP)

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination, and defines how much data can fit in a single packet. IP is responsible for the routing of packets between computers.

Internet Protocol (IP) is a connectionless, unreliable datagram protocol, which means that a session is not created before sending data. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP.

An IP packet, also known as an IP datagram, consists of an IP header and an IP payload. The IP header contains fields for addressing and routing, including the source IP address of the original sender of the IP datagram and the destination IP address of its final destination.

Time-to-Live (TTL) Designates the number of network segments on which the datagram is allowed to travel before being discarded by a router. The TTL is set by the sending host and is used to prevent packets from endlessly circulating on an IP internetwork. When forwarding an IP packet, routers are required to decrease the TTL by at least 1.
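As a sketch of where these fields live, the fixed 20-byte IPv4 header can be unpacked with Python's standard `struct` module. `parse_ipv4_header` is an illustrative helper written for this example, and the header below is hand-built rather than captured traffic.

```python
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Pull the version, TTL, source and destination out of an IPv4 header.

    The fixed 20-byte IPv4 header packs: version/IHL, TOS, total length,
    identification, flags/fragment offset, TTL, protocol, checksum, then
    the 4-byte source and destination addresses.
    """
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built example header: version 4, TTL 64, 192.0.2.1 -> 198.51.100.7
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.7"))
print(parse_ipv4_header(header))
```

Each router that forwards this packet would decrement the TTL byte; when it reaches zero, the packet is discarded.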

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data. TCP is based on point-to-point communication between two network hosts. TCP receives data from programs and processes this data as a stream of bytes. Bytes are grouped into segments that TCP then numbers and sequences for delivery.

Transmission Control Protocol (TCP) is connection oriented, which means an acknowledgment (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgments are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgments to track successful packet transfer.

Before two TCP hosts can exchange data, they must first establish a session with each other. A TCP session is initialized through a process known as a three-way handshake. This process synchronizes sequence numbers and provides control information that is needed to establish a virtual connection between both hosts.

Once the initial three-way handshake completes, segments are sent and acknowledged in a sequential manner between both the sending and receiving hosts. A similar handshake process is used by TCP before closing a connection to verify that both hosts are finished sending and receiving all data.
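The handshake itself is performed by the operating system's TCP stack. In this sketch (the `serve_once` helper and localhost addresses are just for the demo), `socket.create_connection` returns only after the SYN, SYN-ACK, ACK exchange has completed and the virtual connection exists.

```python
import socket
import threading

def serve_once(server: socket.socket):
    """Accept one connection; by the time accept() returns, the kernel
    has already completed the three-way handshake for us."""
    conn, addr = server.accept()
    conn.sendall(b"hello")
    conn.close()

# A listening socket on an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# connect() triggers the SYN / SYN-ACK / ACK exchange; it returns only
# once the handshake has completed and the session is established.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(5)   # data flows only after the handshake
print(data)
client.close()          # close() begins the similar connection-teardown handshake
server.close()
```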

TCP ports deliver data sent using Transmission Control Protocol (TCP) to a specific program. TCP ports are more complex and operate differently from UDP ports.

While a UDP port operates as a single message queue and the network endpoint for UDP-based communication, the final endpoint for all TCP communication is a unique connection. Each TCP connection is uniquely identified by its pair of endpoints (source address and port, destination address and port).

Comparison between the OSI and TCP/IP Models

TCP/IP Model Layer 4. Application Layer

The Application layer is the topmost layer of the four-layer TCP/IP model and sits on top of the Transport layer. The Application layer defines TCP/IP application protocols and how host programs interface with Transport layer services to use the network.

The Application layer includes all the higher-level protocols:

  • DNS (Domain Name System) used to translate hostnames into IP addresses.
  • HTTP (Hypertext Transfer Protocol) is the protocol used to transport web pages.
  • FTP (File Transfer Protocol) used to upload and download files.
  • TFTP (Trivial File Transfer Protocol) a simplified file transfer protocol used to upload and download files.
  • SNMP (Simple Network Management Protocol) designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.
  • SMTP (Simple Mail Transfer Protocol) used for transferring email across the Internet.
  • DHCP (Dynamic Host Configuration Protocol) used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.
  • X Windows, Telnet, SSH, RDP (Remote Desktop Protocol)
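As a small demonstration of one Application-layer service, the system resolver (which speaks the DNS protocol underneath) can be queried with Python's standard library. `resolve` is a hypothetical helper name chosen for this sketch.

```python
import socket

def resolve(hostname: str) -> list:
    """Resolve a hostname to its IPv4 addresses - the everyday job of DNS.

    This just asks the operating system's resolver; the resolver in turn
    consults its hosts file and DNS servers as configured.
    """
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```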

TCP/IP Model Layer 3. Transport Layer

The Transport Layer is the third layer of the four-layer TCP/IP model, positioned between the Application layer and the Internet layer. The purpose of the Transport layer is to permit programs on the source and destination hosts to carry on a conversation. The Transport layer defines the level of service and status of the connection used when transporting data.

The main protocols included at Transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
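The contrast between the two Transport-layer protocols shows up clearly in a short UDP sketch: no handshake and no connection, each datagram is simply sent on its own (the localhost addresses and message text are illustrative).

```python
import socket

# UDP is connectionless: no handshake, each sendto() is an independent datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fire and forget", ("127.0.0.1", port))   # no prior connect needed

data, addr = receiver.recvfrom(1024)     # one datagram, delivered as one unit
print(data)
sender.close()
receiver.close()
```

Unlike TCP, nothing here acknowledges receipt or retransmits a lost datagram; any reliability must come from the application itself.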

TCP/IP Model Layer 2. Internet Layer

The Internet Layer is the second layer of the four-layer TCP/IP model, positioned between the Network Access layer and the Transport layer. The Internet layer packs data into packets known as IP datagrams, which contain source and destination address (logical address or IP address) information used to forward the datagrams between hosts and across networks. The Internet layer is also responsible for the routing of IP datagrams.

A packet switching network depends upon a connectionless internetwork layer, known as the Internet layer. Its job is to allow hosts to insert packets into any network and have them travel independently to the destination. At the destination, packets may arrive in a different order than they were sent; it is the job of the higher layers to rearrange them before delivering them to the proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol) and IGMP (Internet Group Management Protocol).

Reverse Address Resolution Protocol (RARP) was adapted from the ARP protocol and provides the reverse functionality: it determines a software address from a hardware (or MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information such as with the use of PING and TRACERT utilities.

TCP/IP Model Layer 1. Network Access Layer

Network Access Layer is the first layer of the four layer TCP/IP model. Network Access Layer defines details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.

The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.

The most popular LAN architecture among those listed above is Ethernet. When operating on shared media, Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection) to access the medium. An access method determines how a host will place data on the medium.

In the CSMA/CD access method, every host has equal access to the medium and can place data on the wire when the wire is free of network traffic. When a host wants to place data on the wire, it checks the wire to find out whether another host is already using the medium. If there is traffic on the medium, the host waits; if there is no traffic, it places the data on the medium. But if two systems place data on the medium at the same instant, the signals collide with each other, destroying the data, and the data must be retransmitted. After a collision, each host waits for a small random interval of time and then retransmits the data.
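The listen-then-transmit-then-back-off cycle can be sketched as a toy simulation. `csma_cd_send`, the coin-flip collision model, and the `medium_busy` callback are illustrative assumptions made for this example, although the 16-attempt limit does match classic Ethernet's give-up threshold; real adapters implement all of this in hardware.

```python
import random

def csma_cd_send(medium_busy, max_attempts=16):
    """Toy sketch of CSMA/CD from a single host's point of view.

    `medium_busy()` stands in for carrier sense; a collision is modeled
    as a coin flip purely for illustration. Returns the attempt number
    on which the frame finally got through.
    """
    for attempt in range(max_attempts):
        while medium_busy():               # carrier sense: wait for a quiet medium
            pass
        collided = random.random() < 0.3   # pretend another host transmitted too
        if not collided:
            return attempt + 1             # frame delivered on this attempt
        # Collision: back off a random number of slots, range doubling each time.
        _ = random.randrange(2 ** min(attempt + 1, 10))
    raise RuntimeError("too many collisions, giving up")

attempts = csma_cd_send(medium_busy=lambda: False)
print(f"frame sent after {attempts} attempt(s)")
```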


