The Importance of Technology in Community Outreach

As an active board member for one of my local community centers, and a (brand new) member of my local neighborhood council, I have been asked to help with Community Outreach. In both organizations, Community Outreach is handled in a variety of ways, but mostly involves paper mailing lists, newspaper ads, signs, and phone calls. Being a techno-nut, the first thing I said in both cases was “Sure! How’s your website?”, and quietly awaited a response, only to find out that their websites were both essentially non-existent. Definitely need to fix that. But why? What does a website have to do with community outreach? Before I can answer that, we should probably define what Community Outreach actually is.

Community Outreach is, in a nutshell, reaching out to the people of the community to let them know about things you can do to benefit them, or their community. This could be products you offer, services, upcoming events, etc. Someone who helps with community outreach for an organization (usually a coordinator of some kind) also helps the community get access to these benefits, and gathers feedback from people who have used them, so the organization can try to make the benefits even better.

Alright, back to the original question: what does a website have to do with community outreach? To answer this, I can simply say EVERYTHING! A website is nothing more than an information repository that also serves double-duty as a marketing agent. In this day and age, people are much more likely to consult the internet to gather information long before they ever pick up a phone to talk to an actual person. In fact, a well-designed website can provide as much information in a few minutes as a human could in an hour over the phone.

In my personal experience, the number one reason most community-oriented events have such a low turnout is an honest lack of awareness among the public; when there are two flyers taped to a pole and a three-sentence article in the newspaper, the chances of everyone in a community seeing the notice are slim to none. Now, if the same information were posted to the organization’s website (and the public were aware of the website’s existence), quite a few people would have the ability to see the notice. Throw social media into the mix, and a public event could almost instantly be seen by hundreds to thousands of people!

After briefly explaining these points to the two organizations I volunteer for, I had pretty immediate buy-in. I have completed one of the websites already (pending deployment), and am getting started on the other as we speak! Stay tuned…

An Overview of the User Datagram Protocol

As a five-year follow-up to my previous article, An Overview of the Transmission Control Protocol, I figured I would go ahead and crank out its companion piece covering the User Datagram Protocol (UDP). This one is going to be much shorter; RFC 768, the entire specification that defines UDP, is a mere three pages in length.

A brief history: The User Datagram Protocol (UDP) was created by David P. Reed in 1980 to run over Internet Protocol (IP) packet-switched networks. UDP is a connectionless protocol, and does not require a connection to be established to a destination prior to transmitting data. It is stateless, and best used for unidirectional and simple query-response data transactions.

An Overview of the Transmission Control Protocol

The Transmission Control Protocol (TCP for short) was published in January of 1980 as the first protocol designed to work with the Internet Protocol (IP) defined in RFC 791 (later STD 5) for packet-switched networks. It is not uncommon to see TCP and IP mentioned synonymously, as they are designed to work side-by-side within a network. The Transmission Control Protocol was developed by a team of engineers at the Information Sciences Institute at the University of Southern California for the Defense Advanced Research Projects Agency (DARPA) in response to its need for a data transmission protocol that would ensure network reliability and availability. In September of 1981, the final version of TCP was published as RFC 793, which later became known as STD 7. While TCP has had a few extensions added over the years, the protocol has remained essentially unchanged since it was first published as STD 7.

The Transmission Control Protocol was designed to operate at Layer 4 in the OSI network model, right on top of IP at Layer 3. As its name may indicate, TCP is designed for controlling data transmissions between devices connected to an IP network. While IP is responsible for delivering data from one node to another in a generic fashion, TCP is responsible for making sure that the data that IP has to send is properly broken into packets, sent in the right order, and received without error on the other side of the connection. TCP accomplishes this by providing an array of control and validation mechanisms for network devices to use to ensure that data is transmitted correctly from device to device as efficiently as possible.

By using TCP with IP, network engineers are able to implement a connection-oriented protocol, meaning that once a connection is opened to a remote network device, it will remain open until both the client and the server have finished sending data and agree to close it. What does this mean for the average user? While the average user may never know about TCP and what it does, every user would understand how important it is if it were unavailable. TCP provides the core reliability and quality-of-service features available over packet-switched networks. Most of these features are not available in other publicly available Transport-layer specifications, which are generally much more susceptible to lost and corrupted data during transmissions.

Losing transmitted data en route is a relatively common occurrence in most networks, especially when the data has to travel through multiple devices or across great distances. When using TCP, the sending and receiving devices have access to specific features of the protocol that allow lost and corrupted data packets to be detected, and later retransmitted to correct the problem. While the specifics of the flow and congestion control mechanisms are beyond the scope of this document, it should be noted that the combination of these mechanisms and algorithms lends itself to “error-free delivery.” From an application’s and user’s point of view, this means that 100% of the data sent across a network will be delivered to its destination.

Anyone who has ever attempted to read RFC 793 will tell you two things: the reading is hard, and the protocol is easy. “Request for Comments” documents are publicly available for use and download, and are maintained by the Internet Engineering Task Force (IETF), the governing agency for higher-level protocols and specifications that are not defined by the Institute of Electrical and Electronics Engineers (IEEE), which handles lower/physical-level specifications. The simplicity of the TCP protocol’s data structure can be hard to decipher from the 85-page RFC; however, it can be summarized as a single data structure (known as a “TCP header”), represented below (written in C):

// This header goes inside of an IP header (which contains the
// source/destination IP addresses, along with a few other fields).
typedef struct {
    unsigned short src_port;   // Source port, 16 bits
    unsigned short dest_port;  // Destination port, 16 bits
    unsigned long  seq_num;    // Sequence #, 32 bits
    unsigned long  ack_num;    // Acknowledgment #, 32 bits
    unsigned short flags;      // 4 data offset bits, 6 reserved bits,
                               // and the following 6 flag bits (in order):
                               //   URG - the "urg_ptr" field below is used
                               //   ACK - the "ack_num" field above is used
                               //   PSH - triggers the Push function
                               //   RST - resets the connection
                               //   SYN - synchronizes sequence numbers
                               //   FIN - the sender is out of data to send
    unsigned short window;     // The size of the transmission window
    unsigned short checksum;   // The packet's checksum
    unsigned short urg_ptr;    // Offset of "urgent" data (rarely used)
    unsigned short opts[];     // Header options and padding, if any
} TCP_HEADER;
// The packet's actual data would come next.

Each field represents a piece of information that the Transmission Control Protocol requires to operate properly while sending and receiving data. These fields are described as follows:

Source Port

The port that the TCP packet is originating from (e.g., 80 = HTTP, 21 = FTP).

Destination Port

The port that the TCP packet is destined to on the remote device.

Sequence Number

The sequence number of this data packet. If the SYN flag is set, the sequence number is the initial sequence number, which can be predetermined or (preferably) randomly generated and agreed upon.

Acknowledgment Number

If the ACK flag in the TCP_HEADER structure is set, then this field holds the next sequence number the sender is expecting to receive. Once a connection is established, this field is always sent.

Data Offset

This field represents the size of the TCP packet’s header, measured in 32-bit words, not including the size of the data it contains. This also marks the position in the packet where the data begins.


Reserved Bits

The TCP specification initially left 6 bits as reserved and unused, to be allocated later for extensions/customization of the TCP protocol.

Control Flags

See the comments in the TCP_HEADER structure above.

Window Size

The number of bytes the sender of this packet is currently willing to accept, starting from the sequence number in the acknowledgment field.

Packet Checksum

STD 7 initially defined the checksum to be a 16-bit value calculated for every TCP packet by summing all of the 16-bit words using one’s-complement arithmetic and taking the one’s complement of the sum (a binary bit transformation operation). RFC 1146 later extended TCP to also allow the use of 8-bit and 16-bit Fletcher’s algorithm for calculating checksums.

Urgent Pointer

The urgent pointer is a 16-bit value that points to data in the TCP packet that has been deemed “urgent.” It has been noted that this field is often incorrectly implemented and/or broken, and is rarely used.


Options

TCP allows a variety of options to be transmitted at the end of the TCP header (before the packet data). These options are stored as tuples, each containing a type, a length, and a value.


Padding

At the end of the TCP header, there is a block of padding bits used to pad the header so that it is aligned on a 32-bit boundary, meaning that the size of the packet’s header is a multiple of 32 bits.


The Transmission Control Protocol puts one (or more) of these headers on every scrap of data it receives from higher layers in the network stack before sending it down to the IP stack for distribution to the next hop device. Using these headers, the Transmission Control Protocol knows everything it needs to reassemble the data when the client receives it. Of course, the TCP client and server must first perform a few steps to open a connection between them. First, both the client and server will need to open an internet socket in either passive or active mode. A socket is a mechanism for binding an application to a specific data stream associated with a network interface, represented as an IP address and port number. Whichever device is playing the role of the server will be required to open its socket in passive mode, which tells the system to open the socket and wait for incoming connections. Conversely, the client device will need to open its socket in active mode and point the destination to the server device’s IP address after the server has opened its socket. At this point, the connection will be established, and TCP will negotiate it.

The Transmission Control Protocol uses a Three-Way Handshake exchange to initiate every new connection. When a client connects to the server’s socket, the first packet it sends has the SYN flag set, which signals the server to synchronize sequence numbers with the client. The server replies with its own SYN (plus an ACK), and the client acknowledges it; when the synchronization is complete, the connection has been established and is ready for use. As TCP is a connection-oriented protocol, both the client and server may send and receive data at any time; however, TCP leaves communication synchronization for higher-level drivers to manage. Regardless of which device is transmitting, both devices follow the same basic process.

When data is sent from an application to the socket, the TCP driver will check the data for a variety of metrics, including size, options, and checksums. If the data is too large to fit inside a single TCP segment (which is often the case), the protocol will use its built-in segmentation techniques to distribute the data across multiple segments, updating the sequence numbers accordingly. Once the packets have been constructed from the data passed in, they are handed to the IP driver, where they are routed through the network. When the data reaches the destination device, the sending process is reversed to reassemble the packets into the original data that was sent. In the event that some data was lost or corrupted, the protocol will request the missing packets again, and they will be re-inserted into the received data stream upon successful arrival.

This process continues back and forth between the client and server for the lifetime of the TCP session. When both the client and server have decided that they have no more data to send, the connection termination process is initiated. This process uses a handshake similar to the connection establishment, but sets the FIN (Finish) flag rather than the SYN (Sync) flag to denote the end of the connection; each side sends its own FIN and acknowledges the other’s. Once the server and the client have both acknowledged the Finish flag, the sockets can be closed and the communication session is complete.

While there are dozens more pages that could be written on the inner workings of the protocol, that would detract from the core point of TCP: simplicity. When the Transmission Control Protocol was drafted in the early 1980s, system resources were very limited, so the protocol was designed to be both small and simple, yet attractive for its reliable connections and its scalability across an enormous collection of devices of all varieties. Today, TCP (along with IP) has become a “must-have” technology for every device wanting to connect to a public network, and is used as the de facto backbone of the Internet as we know it.

While the Transmission Control Protocol is an excellent choice for almost every application, there are still a few instances where it may not be the most appropriate choice for the data being sent. Network devices that are required to send large amounts of data over short distances tend to run quite a bit slower over a TCP connection than over a UDP (User Datagram Protocol) connection, which sacrifices reliability and guaranteed delivery in exchange for a much smaller protocol header and more usable bandwidth. Network game servers and streaming media services are excellent examples of systems that often benefit from using alternative protocols for data transmission.

Even the most popular underlying protocols in the world contain flaws, and TCP is no different. Since its inception in the 1980s, the Transmission Control Protocol has been susceptible to a variety of attack techniques that exploit flaws in the protocol’s design, generally known as Denial of Service (DoS) attacks, so named because the aftermath of the attacks is a denial of the target’s services to legitimate network users. These attacks usually require the aggressor to have some knowledge about the target network, including legitimate IP addresses and a rough idea of the network’s design. By using a collection of tools and utilities, an attacker could craft malicious TCP packets and inject them into a real data connection to masquerade as a legitimate network device, either to interrupt services or gain access to a private network. While many solutions and workarounds have been proposed and implemented to reduce the prevalence of these vulnerabilities, the problem is deeply integrated into the core structure of the protocol, and will therefore always be a threat.

To conclude, we have learned that the Transmission Control Protocol, albeit imperfect, is one of the essential components of a reliable packet-switched Internet Protocol network. The simplicity of its design lends itself to its speed and efficiency by minimizing the excess overhead associated with other transport-layer protocols, and the robustness of the protocol’s features ensures it provides a consistent, reliable user experience across networks of any size. After more than three decades of faithful public service, the Transmission Control Protocol has stood the test of time as one of the true core components of the Internet as we know it, and is expected to be widely implemented and used for decades to come.


Braden, R. and V. Jacobson. “RFC 1072.” IETF Data Tracker. IETF, Oct 1988. Web. 28 Oct 2012.

Gilbert, Howard. “Introduction To TCP/IP.” PC Lube and Tune. Yale University, 02 1995. Web. 28 Oct 2012.

“Internet Protocol Suite.” 2012. Web. 28 Oct 2012.

Kessler, Gary. “An Overview of TCP/IP Protocols and the Internet.” Gary Kessler Associates. N.p., 09 2010. Web. 28 Oct 2012.

Partridge, A. and J. Zweig. “RFC 1146.” IETF Data Tracker. IETF, Mar 1990. Web. 28 Oct 2012.

Parziale, Lydia. TCP/IP Tutorial and Technical Overview. 8th ed. IBM Corp., 2006. 149-170. eBook.

Postel, Jon, ed. “RFC 793.” IETF Data Tracker. IETF, Sept 1981. Web. 28 Oct 2012.

Rouse, Margaret. “TCP.” Search Networking. N.p., 01 2006. Web. 28 Oct 2012.

Rouse, Margaret. “TCP/IP.” Search Networking. N.p., 01 2008. Web. 28 Oct 2012.

“TCP/IP Suite.” N.p. Web. 28 Oct 2012.

“Transmission Control Protocol.” Wikipedia. 2012.


Another research paper from the beginning of 2011…

In the early 1980s, networking technology was becoming more widespread, and the need for stronger security for large corporate networks was growing. Though the specific year is under dispute, the first firewall was created near the end of that decade. Decades later, the modern firewall has evolved into a sophisticated protection device for personal and corporate networks worldwide. Every day, these tailor-made software applications and devices fend off hundreds to thousands of attacks from outside users, acting as a “front gate” to filter out suspicious activity.

A firewall is a system that is designed to prevent unauthorized access to a private network, and is usually considered the first line of defense in a secure network. In most cases, these unauthorized access attempts originate from outside the private network. However, attacks from inside the network are also possible, in which case a firewall can help reduce the attack surface by segmenting the intranet, possibly slowing down the intruder. Firewalls work by examining every unit of data that enters or leaves the network, and matching it against a rule set that determines whether the data meets specific requirements to be allowed through the firewall to its destination.

Firewall solutions can fall into one of two broad categories: hardware and software. A hardware firewall is a physical device that can be strategically placed in the network based on the filtering rules to be applied to the traffic across a specific network segment. These physical devices can be anything from a proprietary “firewall in a box,” which resembles an inline repeater, to a dedicated PC with a stripped down operating system running a firewall software solution. Hardware firewalls are generally placed at the beginning of the internal network, between the building’s point of presence and the first LAN device on the network. This allows the device to filter all incoming and outgoing data for the entire network. Software firewalls perform the exact same tasks as hardware solutions, but run as a software application or service on end devices, such as workstations and servers. These software solutions are optimally designed to protect a single device, but can also be used in the same manner as a hardware device to filter all traffic for a network.

Firewalls can also be sub-categorized based on how they operate. At this time, there are five major operation roles that a firewall can fulfill:

  • Packet Filter: The firewall analyzes every packet it receives and matches it against a rule set. If the packet meets the rule set’s criteria, the packet is forwarded to its destination.
  • Protocol Filter: The firewall filters traffic based on the protocol being used for transmission. For example, this could allow the firewall to block all UDP traffic while allowing TCP traffic on ports 0-1024.
  • Proxy Server: Proxy servers use Network Address Translation (NAT) to effectively hide a private network’s internal IP addresses from the outside world by altering the IP address in each packet to make it appear to have a different address, and route inbound traffic destined for the proxy address to the private network.
  • Application Gateway: Filters traffic based on the applications and services running on the network, such as telnet and FTP.
  • Circuit-level Gateway: This style of firewall filters traffic only while a connection is being established over a network segment. After the connection is established, the firewall allows all traffic to flow across the segment.

Note that firewall roles, specifically packet and protocol filters, can run in one of two modes: stateful, where the firewall can determine the state of the connection and packet order, and stateless, where each packet is inspected without any knowledge of the connection’s status or other packets sent and received. These roles are often combined within a single firewall solution to increase the protective qualities of the product. By working together, each role can be used to help prevent different methods of attack.

Now that we understand how a firewall filters traffic, we can discuss what they are designed to defend against. Every day, millions of networks across the world are penetrated by a variety of attacks. While there are hundreds of reasons, oftentimes the motivating factor for these attacks is for profit, be it monetary or information gain. Sometimes, it can be as simple as a disgruntled employee “getting revenge” on their employers. Regardless of the reason, the purpose of a firewall is to reduce the attack surface, attempt to prevent intrusions, and to monitor and log any attempts to break into the network.

There are dozens of methods a “hacker” (a person who exploits vulnerabilities in a system or application to gain entry) can use to penetrate a network. Some of the more common methods include the use of viruses, Trojan horses, worms, rootkits, and scanners to gain access to an internal network. Hackers can also exploit vulnerabilities in applications, network protocols, and even hardware to gain access. After an intruder has gained access to a network, they have the potential to wreak havoc by gaining administrative privileges, which can be used to steal and destroy information, vandalize websites, deny services to legitimate users, and even destroy critical hardware. In most cases, these intruders also open up more holes in the network perimeter so they can return at a later time.

Firewalls are designed to allow, deny, and monitor all incoming and outgoing connections that they are responsible for. A firewall will usually block all unused and disabled ports by default, reducing the attack surface substantially. For active ports, the firewall can be configured to either filter traffic based on a rule set, or ignore the port and allow all traffic. The same method is used for building application and system service connection rules. After all ports are configured, the firewall begins monitoring and filtering all connections.

When an attempt to break into a network or device is detected, most firewalls will immediately begin logging all activity that is originating from the suspicious IP (if they are not already logging network connections). Some of the more advanced firewalls are able to immediately notify the network administrator via email, phone, or text message when an intrusion attempt is detected. Some of the attacks that can trigger a firewall’s intrusion alarm include:

  • Eavesdropping: The first step a hacker usually takes to enter a secured network is to gather information about the target. A hacker can use a variety of tools to accomplish this, such as keyloggers, protocol analyzers, and even social engineering, to gain user names, passwords, and other information about the network.
  • Unauthorized Network Access: An unauthorized user connects to and gains access to a network service that they do not have permission to use. This can be caused by missing, misconfigured, or insufficient user privileges implemented by the network administrator. Even when permissions are correctly configured, attackers can get around them using alternate methods.
  • Exploiting Security Vulnerabilities: Many applications and services implemented in today’s networks are full of security holes. Buffer overflows and underruns are a good example of software vulnerabilities, and represent the majority of “bugs” in most software. By injecting the right sequence of data into an unsecured application, an attacker can gain access to system applications and services, and with them the ability to take over the machine.
  • Spoofing: This is a complex attack that usually requires packet capturing and injection software to craft and send fake IP packets. By capturing packets from a live connection to a private network, a hacker can craft packets to match the connection’s parameters, inject them, and eventually hijack the connection.
  • Denial of Service (DoS): Using a botnet and specialized applications, a hacker can deny service to a private network by having hundreds to thousands of “zombie” computers all attempt to connect to the network and overload its services. This leads to a denial of network services to legitimate users.

Firewalls have played a vital role in network security for over three decades, acting as the gatekeeper for the connections they manage. By filtering traffic, securing ports, and monitoring and logging connections, a well-configured firewall can successfully prevent most attempts to penetrate a network. A firewall, in conjunction with other common network security measures, makes up the foundation of a well-formed security plan to help ensure network security. While there will always be unpreventable zero-day attacks and missed security bugs in software, firewalls will continue to stand at the front lines to defend networks for years to come.