Mail Routing Fundamentals

E-mail can be accessed in many ways, from the e-mail program installed on your desktop to the web application you use while traveling. Even though these methods of accessing e-mail have different interfaces, their method for sending e-mail is very similar. The process of directing a message to the recipient's host is called routing. Apart from finding a path from the sending site to the destination, it involves error checking as well as potentially virus and spam filtering. Many processes, systems and protocols are involved in the successful delivery of a single e-mail message. Sometimes messages arrive at their destination almost immediately, but sometimes they are delayed or even rejected somewhere in the process. We are going to discuss the basic process of e-mail delivery and look at a few reasons messages can be delayed or rejected along the way.

When you click the "Send" button your e-mail client initiates a two-way conversation with your e-mail provider's SMTP (Simple Mail Transfer Protocol) servers over standard SMTP ports, TCP ports 25 and 587. SMTP servers only understand very basic commands, so your mail client must conduct this conversation using only commands the SMTP server can interpret. The most common SMTP commands are:

HELO / EHLO: Identifies the sender (client) to the SMTP server. EHLO is a newer command than HELO and advertises extended functionality; most servers still accept either command as an introduction.
MAIL FROM: Specifies the e-mail address of the sender.
RCPT TO: Specifies the e-mail address of the recipient.
DATA: Starts the transfer of the actual message data (body text, attachments, etc.).
RSET (RESET): Aborts the current mail transaction.
VRFY (VERIFY): Asks the server to confirm that the argument identifies a user or mailbox.
HELP: Causes the server to send helpful information to the client.
QUIT: Ends the session.
AUTH LOGIN: Initiates SMTP authentication when sending.

The e-mail client sends these commands to the mail server, and the server replies with numerical codes followed by additional information. These reply codes tell the client whether the previously sent command succeeded or failed. Commands and replies are composed of characters from the ASCII character set.
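To make the reply codes concrete, here is a small Python sketch (Python is used purely for illustration) that classifies a reply line by the first digit of its numeric code:

```python
def classify_reply(line: str) -> str:
    """Classify an SMTP reply by the first digit of its numeric code.

    2xx = command accepted, 3xx = server expects more input (e.g. after
    DATA), 4xx = temporary failure (retry later), 5xx = permanent failure.
    """
    code = int(line[:3])   # the reply always begins with a three-digit code
    return {2: "success", 3: "continue", 4: "temporary failure",
            5: "permanent failure"}[code // 100]

print(classify_reply("250 OK"))                            # success
print(classify_reply("354 End data with <CRLF>.<CRLF>"))   # continue
print(classify_reply("451 Greylisted, try again later"))   # temporary failure
print(classify_reply("554 5.7.1 Relay access denied"))     # permanent failure
```

Mail clients apply exactly this distinction when deciding whether to queue a message for retry (4xx) or bounce it immediately (5xx).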

Once the connection is established, the server authenticates your username and password. SMTP authentication is initiated with the AUTH LOGIN command. The server responds with 334 VXNlcm5hbWU6, which is the phrase Username: in base64 encoding, and you reply with your base64-encoded username. The server then responds with 334 UGFzc3dvcmQ6, which is Password: in base64 encoding; you send your base64-encoded password and should receive a 235 Authentication Succeeded message. If the server accepts your credentials, the session proceeds to the MAIL FROM command. Your mail client handles all of this automatically, but with some know-how it can also be done manually from a command prompt.
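You can reproduce the base64 strings from the AUTH LOGIN exchange with a few lines of Python; the user@example.com address below is a placeholder:

```python
import base64

# The server's 334 challenges are just base64-encoded prompts, and the
# client answers with base64-encoded credentials.
assert base64.b64encode(b"Username:").decode() == "VXNlcm5hbWU6"
assert base64.b64encode(b"Password:").decode() == "UGFzc3dvcmQ6"

# What a client would send in reply to the Username: challenge:
print(base64.b64encode(b"user@example.com").decode())

# Decoding a challenge recovers the original prompt:
print(base64.b64decode("VXNlcm5hbWU6").decode())  # prints: Username:
```

Note that base64 is an encoding, not encryption; this is why AUTH LOGIN should only be used over a TLS-protected connection.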

The output of a telnet session will look something like this:

EHLO Hello []
250-SIZE 52428800
250 HELP
334 VXNlcm5hbWU6dXNlcm5hbWUuY29t
334 UGFzc3dvcmQ6bXlwYXNzd29yZA==

235 Authentication succeeded

You will know if your message was rejected by your SMTP server, and why, by looking at the undeliverable message returned to the e-mail account you sent from. An undeliverable message will look something like the example below, though the information and formatting will vary by provider. It is also possible for the message to be rejected before ever reaching your SMTP server, in which case the error appears directly in your mail client or webmail interface (see the error section below).

If you are using an outside company as a smart host, check each rejection for the identity of the rejecting server, which could now be your own internal mail server, the smart host's server, or the recipient's server.

Every domain that is used for e-mail needs an MX record (Mail Exchanger record) in order to send and receive mail. An MX record is a DNS entry that tells the sending SMTP server where to deliver e-mail for a domain. Your MX record is determined by where your mail is hosted; each company has a different MX record, and larger companies will have several. An MX record looks like this:



In our example the hostname tells querying SMTP servers that e-mail addressed to someone should be delivered to the server ‘’.  This requires that ‘’ exists (either as a CNAME or an A-record) in the authoritative zone.

There are two numeric values in the above record. The first is 3600, the TTL (Time to Live), which tells other DNS servers that it is OK to cache this record for up to 3600 seconds (one hour). The second numerical value is 10, the MX record priority. In this example it doesn't matter, as there is only one record, but with multiple MX records it determines the order in which servers are tried: the lowest priority value is used first (for example, a record with a priority of 10 would be used before a record with a priority of 30), and if the first server is unresponsive the next one in priority order is used. A domain with multiple MX records would look like this:


As you can see, multiple servers are listed for the same domain. They all have the same TTL, 14400 seconds (four hours); the differences are the priority and the server name. The sending server tries to deliver e-mail to the servers in order from lowest to highest priority value until it finds one that accepts the connection. If the first server goes down or is too busy to accept the connection, delivery is attempted to the next servers; when two servers share the same priority, one is chosen at random. Large providers that relay mail for many domains are likely using a clustered mail environment, which means many servers with different IP addresses can send mail for the same host.
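The selection order described above can be sketched in Python; the hostnames and priorities below are hypothetical:

```python
import random

# Hypothetical MX records as (priority, hostname) pairs; a lower
# priority value is tried first, equal values are tried in random order.
mx_records = [(30, "backup-mx.example.com"),
              (10, "mx1.example.com"),
              (10, "mx2.example.com")]

def delivery_order(records):
    """Return hosts in the order a sending server would try them."""
    shuffled = random.sample(records, len(records))  # randomize ties
    # sorted() is stable, so equal-priority hosts keep their shuffled order
    return [host for _, host in sorted(shuffled, key=lambda r: r[0])]

order = delivery_order(mx_records)
print(order)  # one of the priority-10 hosts first, backup-mx always last
```

Running this repeatedly shows mx1 and mx2 alternating at the front while the priority-30 backup is always tried last.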

If you want to know how your mail server might determine the DNS record read my article on DNS resolution here:

Once your SMTP server has determined which server the mail should be sent to, it attempts a connection with that server using the same basic commands your mail client used to connect to your SMTP server. The message typically passes through the receiving provider's content-filtering servers before even making it to their SMTP server, and it may be rejected there. There are a few reasons mail can be rejected or delayed by a provider's content filters. First, if a content-filtering server deems your mail content to be junk, it will likely tag the message as junk and deliver it according to the domain's preferred settings; most domains choose to have spam messages delivered to a dedicated "Junk Mail" folder.

If you are on a blackhole list you may receive undeliverable e-mail notifications stating that the mail server you use is listed. The first course of action to get removed from a blackhole list is to resolve any outstanding issues with your mail server. If this mail server is provided by an outside company or managed by your IT department, send them the bounceback you received so they can address the issue. If you manage the mail server yourself, you can check some of the major blacklists for more information and to see whether your server is listed in their databases.

Once the receiving server has determined that your mail is authorized and your server is not on a blackhole list, it attempts to deliver the message to the recipient's mailbox, first checking that the recipient address exists as a mailbox on the server. As long as it exists, the message is delivered to the recipient's mailbox for their mail client to retrieve. However, the message can still be rejected at this point: if the recipient address is spelled incorrectly, has been deleted, or the user's mailbox is full, you may receive a bounceback stating why your message was rejected.

Alternate delivery methods and setting up your own mail server

A smart host is an outside mail server that accepts inbound SMTP and routes it out again; the point of sending your mail through one is to offload the task of routing and retrying mail to another location. Directing your outbound traffic to an SMTP host at your ISP frees up your outgoing internet connection much more quickly than routing everything through your own network. A relay SMTP server is usually the target of a DNS MX record that designates it, rather than the final delivery system. The relay server may accept or reject the task of relaying the mail in the same way it accepts or rejects mail for a local user. If it accepts the task, it then becomes an SMTP client, establishes a transmission channel to the next SMTP server specified in DNS, and sends it the mail. If it declines to relay the message, a 550 error should be returned to the sender.
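As a rough illustration, the relay server's accept-or-decline decision at RCPT TO might be modeled like this in Python; the local domain and authorized client network are placeholder values:

```python
# Placeholder policy data for illustration only.
LOCAL_DOMAINS = {"example.com"}
AUTHORIZED_RELAY_NETS = {"192.0.2."}  # naive prefix check, not real ACL logic

def rcpt_to_reply(recipient: str, client_ip: str) -> str:
    """Reply a relay server might give to a RCPT TO command."""
    domain = recipient.rsplit("@", 1)[-1]
    if domain in LOCAL_DOMAINS:
        return "250 OK"                       # local mailbox: deliver here
    if any(client_ip.startswith(net) for net in AUTHORIZED_RELAY_NETS):
        return "250 OK"                       # authorized client: relay onward
    return "554 5.7.1 Relay access denied"    # decline to relay

print(rcpt_to_reply("user@example.com", "203.0.113.9"))    # 250 OK
print(rcpt_to_reply("user@elsewhere.net", "192.0.2.15"))   # 250 OK
print(rcpt_to_reply("user@elsewhere.net", "203.0.113.9"))  # 554 ...
```

Real servers base this decision on authenticated sessions and access lists rather than raw IP prefixes, but the accept-locally / relay-for-clients / refuse-everyone-else split is the same.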

Common Rejections

From: Mail Delivery System <>
To: <>
Subject: Undelivered Mail Returned to Sender
This is the mail system at host

I’m sorry to have to inform you that your message could not
be delivered to one or more recipients. It’s attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

                   The mail system

<>: host[] said: 554
5.7.1 <>: Relay access denied (in reply to RCPT TO

From this delivery failure notice you can find the following information

1. The name of the server that rejected your message (the message can be rejected either by your own server or by the recipient's mail server): the line "This is the mail system at host" identifies which server generated the rejection.

2. The cause of the rejection: the reply "554 5.7.1: Relay access denied (in reply to RCPT TO command)" indicates relay access was refused, meaning you do not have permission to send your mail through this mail server.

SMTP errors from a Mail client

No Relaying Allowed: This rejection message tells you that the credentials (username and password) you are sending to the e-mail server are not authenticating. Check the settings in your e-mail client to ensure the username and password fields are not blank, the correct credentials are entered, and outgoing server authentication is enabled.

<>: host[] said: 554

    5.7.1 <>: Relay access denied (in reply to RCPT TO


Could not connect to SMTP server: This error tells you that the connection from your e-mail client to your e-mail provider's server failed. Check the SMTP server name, ensure your SSL settings match the specified port numbers, and confirm that no firewall or antivirus program is blocking outbound connections.


Greylisting is one way mail providers attempt to limit the amount of spam that gets through. Greylisting checks the legitimacy of an e-mail message by temporarily rejecting the incoming message with a 451 error. The sending SMTP server recognizes this as a temporary rejection and attempts to re-send the message. If there is a prior relationship between the sending server and the recipient server, the message is delivered without any issue. At the same time the 451 error is sent, a triplet is recorded in the greylisting database as an unconfirmed triplet.

A triplet consists of:

  • IP address of the sending server
  • Sender e-mail address
  • Recipient e-mail address

This is where delays come in: the retry can take up to 24 hours depending on the settings of the sending server and how it is configured to respond. With a small, single-server mail setup, greylisting generally doesn't cause a significant delay. However, in a clustered mail environment, where many different servers with different IP addresses can be used to send and receive mail, the delays can be significant. If the rejected message is re-sent and all information matches the prior unconfirmed triplet, the message passes greylisting and is delivered. If any of the three items does not match, it is treated as a new message and the greylisting process starts over.
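A toy Python model of the triplet bookkeeping, assuming a hypothetical five-minute greylisting delay (real servers vary widely in how long they defer):

```python
import time

greylist = {}          # triplet -> time the triplet was first seen
GREYLIST_DELAY = 300   # assumed 5-minute delay; an illustrative value only

def check(sender_ip, mail_from, rcpt_to, now=None):
    """Return the SMTP reply for this delivery attempt."""
    now = time.time() if now is None else now
    triplet = (sender_ip, mail_from, rcpt_to)
    first_seen = greylist.setdefault(triplet, now)  # record if unseen
    if now - first_seen < GREYLIST_DELAY:
        return "451 Greylisted, please retry later"
    return "250 OK"    # the triplet was confirmed by a retry after the delay

# First attempt is deferred; a retry from the same triplet passes, but a
# different sending IP (as in a clustered environment) starts over.
print(check("198.51.100.7", "a@example.com", "b@example.net", now=0))    # 451
print(check("198.51.100.7", "a@example.com", "b@example.net", now=600))  # 250 OK
print(check("198.51.100.8", "a@example.com", "b@example.net", now=600))  # 451
```

The last line shows exactly why clustered senders suffer: each new sending IP produces a new unconfirmed triplet and a fresh deferral.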

RBLs (Realtime Black Hole Lists)

Specific companies run blackhole lists, and mail providers consult those lists to determine when a server's traffic has become unfavorable and to protect themselves from accepting it. It is common for a server's mail IP address to temporarily end up on a public blackhole list. This can happen because of the overall volume of mail coming from that server, because its messages have characteristics of junk mail, or because the list has received messages at spam traps it operates.

How can you identify if your mail server is blackholed?

Spamhaus (

MXtoolBox (

Sorbs (

BGP, ASICs, and IPv4 at scale – Why the Internet is (or was) Broken

On Tuesday August 12, 2014, the global IPv4 Internet routing table exceeded 512,000 routes. This is an important number for several very common routing platforms across multiple networking vendors, and as a result many networks across the globe experienced interruptions and performance issues. While some articles have reported on the scope of impacted carriers and services, I’ll discuss the technologies and limitations that led to this failure.

To understand the source of this failure, there are two critical pieces of Internet technology to grasp. First is BGP, the way routers around the Internet exchange information about which networks they know about and how to reach them. Second, and the more immediate cause of the recent failure, is the implementation of routing and destination lookups in high-performance, carrier-grade routers and switches.

Border Gateway Protocol

The common language that all carrier and ISP routers speak throughout the Internet is crucial to the delivery of the interconnected, distributed network we all use today. BGP dates back to 1995 (and even earlier in older iterations) and is still in use today, albeit with many adjustments and new features. At the core of BGP is a common method for routers and network devices to tell each other about networks and their relationship to those networks. A critical component of BGP operation is the Autonomous System (AS), identified by an Autonomous System Number (ASN), which defines a network and its associated prefixes (known routes) as viewed from the perspective of the BGP protocol.


BGP speaking routers across autonomous systems peer with each other in a variety of arrangements such as dedicated peering points or direct connections, and exchange their known routes or a subset of known routes with each other. This exchange also includes a large number of attributes (Origin, As_Path, Next_Hop, Local_Pref, communities…) associated with each prefix. As you might expect, at a global scale this table is quite large – and each time a new autonomous system (typically an ISP or larger enterprise) pops up, or an existing autonomous system further distributes its network, this table continues to grow.



Routers in larger scale networks need to store, reference, and update this large global BGP routing table quickly and frequently. So where does all of this information get stored? The answer isn’t simple, and varies wildly by vendor and hardware, but in large scale network hardware, the general implementation looks like this.


Routers handling BGP (and other routing protocols) maintain routing tables and store known routes in a Routing Information Base, or RIB, specific to each protocol. The best known route for a destination is then inserted into the Forwarding Information Base, or FIB, which the router uses for forwarding lookups on packets due for processing. While the RIB is typically stored in general system memory, where space is often readily available, the FIB is stored in TCAM (Ternary Content-Addressable Memory) residing on ASICs. Application-Specific Integrated Circuits are the secret sauce of high-performing network appliances, enabling routing and switching speeds above and beyond what traditional x86-based processing and memory can achieve. The ASICs on modern routers and switches contain a fixed amount of TCAM, enabling the low-latency lookups and updates required for routing and switching.
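As an illustration of why a fixed FIB capacity matters, here is a toy Python model (not any vendor's actual implementation) of best-route selection from a RIB into a capacity-limited FIB, followed by a longest-prefix-match lookup; all prefixes, metrics, and peer names are invented:

```python
import ipaddress

# A tiny stand-in for the finite TCAM on a router's ASICs.
FIB_CAPACITY = 4   # real platforms defaulted to ~512,000 IPv4 entries

rib = {   # prefix -> candidate next hops as (peer, metric); lowest metric wins
    "203.0.113.0/24": [("peer-a", 10), ("peer-b", 20)],
    "198.51.100.0/24": [("peer-b", 5)],
}

fib = {}
for prefix, candidates in rib.items():
    if len(fib) >= FIB_CAPACITY:
        # On real hardware, exhausting TCAM means punted lookups, software
        # switching, or outright failure rather than a tidy exception.
        raise RuntimeError("FIB/TCAM full")
    best_hop = min(candidates, key=lambda c: c[1])[0]   # best route only
    fib[ipaddress.ip_network(prefix)] = best_hop

# Forwarding lookup: the longest matching prefix in the FIB wins.
dst = ipaddress.ip_address("203.0.113.5")
matches = [net for net in fib if dst in net]
print(fib[max(matches, key=lambda net: net.prefixlen)])  # peer-a
```

The key point is that only the best route per prefix is installed, and the structure holding those entries has a hard size limit; grow the table past that limit and the fast path stops working.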

So what happened, and what does it have to do with TCAM? Going back to the original issue with our picture of routing tables and TCAM in mind, you can see how the growing IPv4 Internet might be a problem for platforms with a fixed amount of space available. As it happens, some of the most commonly deployed BGP-speaking routers and switches of the last decade, including the Cisco 6500 and 7600 platforms, have had default TCAM allocations allowing a maximum of 512,000 IPv4 entries. On many platforms, as soon as TCAM space reaches its limit, failures can occur ranging from reduced performance to catastrophic failure. On Tuesday that 512,000-route limit was reached, and the straw that broke the camel's back, so to speak, was Verizon-owned ASNs 701 and 705. While the change, an apparently inadvertent advertisement of thousands of new routes, was quickly reversed, routers across the Internet saw performance issues and failures for hours due to the nature of BGP propagation.


Even though we are again under the 512k IPv4 route mark, it's only a matter of time until we cross that barrier for good. This expansion of the IPv4 table is ultimately no one's fault and should be seen as the expected progression and growth of the "Internet of Things". It's hard to predict what kind of scale future networks will need to adapt to, and in a world of fast-expanding virtualization and cloud services it's easy to forget that the underlying network transport behind these services must scale to cope with demand. Hold on to your hats over the next few weeks as we inevitably cross the 512k barrier again, and see who accepted this as a wake-up call to upgrade where necessary, and who didn't.

How can I reset my e-mail password using Connect Exchange?

Changing the password for your e-mail account with Connect Exchange is very simple.

First, log into your email account using the web portal at

After you have logged in, choose the 'settings' icon in the top right-hand corner of the screen, then choose 'change password'.


This will open the password reset screen where you will enter your current email password, followed by your new password twice.


Simple Desktop Virtualization: Building a Robust Environment Using Microsoft Remote Desktop Services

There have been a number of new features introduced to Remote Desktop Services in Windows Server 2012 that make the deployment of desktop virtualization an attractive option for businesses looking to enhance the experience of their desktop users.  This post will briefly review the concept of desktop virtualization and the two primary deployment models but will focus primarily on session-based desktop virtualization architecture.  This post is intended for administrators or engineers considering desktop virtualization deployment options.

Desktop virtualization allows you to abstract the desktop and application environment from the endpoint device, centrally manage and deploy applications and offer flexibility to your end-users. There are two primary deployment models: session-based and virtual machine-based desktop virtualization. Regardless of which deployment model (or combination) you choose, you can realize a number of benefits. Users can securely access their corporate desktop and applications from a thin-client, from a tablet in a coffee shop or from their computer at home.

Instead of deploying full desktop computers which require you to maintain support agreements and pay for OS and application licenses, you can simply deploy thin client devices with no moving parts to each desk. By providing a standardized desktop environment and removing hardware support from the equation, you also reduce support overhead. Centralized OS and application management improves security and reduces the risk of malware infections or other compromises due to misconfiguration.

Provisioning a new desktop environment becomes a simple task of setting up a new user and placing a thin client device on their desk. Centralizing the desktop environment in the data center, or the cloud, also allows you to reduce licensing costs as well as deploy or remove resources based on user-demand.

RDS Role Services

The Remote Desktop Services server role in Windows Server 2012 is comprised of multiple role services that can be deployed together for small deployments or spread across multiple servers for fault tolerance and scalability. The six available role services are described below (some are supplemental or not required for the deployment of session-based desktop virtualization):

Session Host: This is one of the core role services and allows a server to host session-based virtual desktops or RemoteApp applications (more on this later) – this role service was formerly known as terminal services in legacy versions of Windows Server.

Virtualization Host: This is another core role service that allows a server to host virtual machine-based virtual desktops by integrating Remote Desktop Services with the Hyper-V server role.  This is the counterpart role service to the session host role service and is required when you wish to support user connections to either pooled virtual desktops or personal virtual desktops.

Licensing: This is a core role service that is required in all deployments and allows administrators to centrally manage the licenses required for users to connect to session host or virtualization host servers.  This role can often be placed on a host running additional Remote Desktop Services role services due to its lightweight nature.

Connection Broker: This role service is critical to enterprise deployments because it allows administrators to build high-availability by load-balancing incoming connection requests across available session host or virtualization host servers. The connection broker role service keeps track of sessions and routes connection requests to available hosts based on host load – in the case of existing connections, the connection broker will re-connect the user to their existing session. In Windows Server 2012, this role service now supports Active/Active operation allowing you to meet scalability requirements in addition to fault-tolerance in the connection broker layer of the infrastructure.

Gateway: This is a supplemental role service that allows users to securely connect to their session-based desktops, virtual machine-based desktops or RemoteApp published applications on an internal corporate network from any device.

Web Access: Another supplemental role service that allows users to securely connect to their session-based desktops, virtual machine-based desktops or RemoteApp published applications through a web browser.
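To illustrate the Connection Broker behavior described above, here is a hedged Python sketch (host names and session counts are invented) of its reconnect-or-least-loaded routing decision:

```python
# Toy state for two session hosts in a collection; counts are illustrative.
hosts = {"rdsh1": 12, "rdsh2": 7}      # host -> current session count
sessions = {"alice": "rdsh1"}          # user -> host of an existing session

def route(user: str) -> str:
    """Mimic a connection broker: reconnect if a session exists,
    otherwise send the user to the least-loaded session host."""
    if user in sessions:
        return sessions[user]          # re-connect to the existing session
    host = min(hosts, key=hosts.get)   # least-loaded host wins
    hosts[host] += 1
    sessions[user] = host
    return host

print(route("alice"))  # rdsh1: she already has a session there
print(route("bob"))    # rdsh2: currently the least-loaded host
```

The real broker tracks this state in a shared database (SQL Server in Active/Active mode) so any broker instance can make the same decision, but the two rules sketched here, reconnect first and balance otherwise, are the heart of it.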

VDI Enabling RDS Features

RemoteFX: RemoteFX provides end-user experience enhancements to the Remote Desktop Protocol that allow RDP to operate as a truly viable desktop virtualization platform. A few RemoteFX components provide critical capabilities for desktop virtualization: full USB redirection support for all desktop virtualization workloads; vGPU hardware acceleration (the ability to use a physical GPU in Hyper-V hosts to accelerate host-side rendering of graphics-intensive content); intelligent transport enhancements that introduce UDP support, allowing fuller and smoother desktop experiences over low-bandwidth, high-latency links; and multi-touch capabilities to support gestures (e.g. pinch, zoom, swipe).

User Profile Disks: The introduction of User Profile Disks greatly enhances the ability for users to personalize pooled virtual machines, removing the need to implement more complex roaming profile or folder redirection methodologies. This technology is really only applicable to virtual machine-based deployments but is worth mentioning, as it makes deployment of virtual machine-based VDI much simpler and more robust.

Fairshare Resource Management: New to Windows Server 2012, the session host role service now introduces "fair share" resource management for network I/O, storage I/O and CPU. This resource management ensures that no single user can adversely impact other users on the same host by allocating each session an equal share of the resources described above. This simple enhancement makes the user experience in session-based deployments much smoother and more consistent and eases the burden on administrators when deploying additional RD Session Host servers to the session collection.

Deployment Models

Now that we’ve gone through a brief review of Remote Desktop Services and its various role services, let’s focus on examining the session-based deployment model. First and foremost, let’s address why the session-based deployment model is so attractive when compared to the virtual machine-based deployment model.

The "traditional" approach to VDI is virtual machine-based desktop virtualization; when using Microsoft Remote Desktop Services this generally means one or more RD Connection Broker instances and a number of dedicated Hyper-V hosts running the RD Virtualization Host role service. In this model, users connect to a virtual machine either from a pool of identically configured virtual machines (called pooled virtual desktops) or to a personal virtual desktop that is dedicated to a specific user. The drawback is that this model requires standing up dedicated virtual machines per user, which requires significantly greater resources than a session-based deployment. In addition to the increased resource demands, administrators must ensure there are adequate pooled or personal virtual desktops in place to meet user load requirements. You should consider virtual machine-based deployment in those cases where users require significant control over their desktop environment (e.g. administrative rights, installing applications, etc.).

In the session-based model, we continue to rely on session broker instances to load-balance incoming connections but replace the dedicated Hyper-V hosts running the RD Virtualization Host role service with servers running the RD Session Host role service. In this model, users connect to a centralized installation of a desktop running on one or more session host servers (when session hosts are configured in a farm, it is referred to as a session collection) only receiving the user interface from the session host server and sending input commands to the server. When the RD Session Host role service is installed, fairshare resource management ensures storage I/O, network I/O and CPU are evenly distributed to all users; in addition, Windows automatically configures processor scheduling to prioritize performance of user applications over background processes. This is a good reason to avoid collocating this role service on a server with “traditional” background type services (e.g. IIS, Exchange, etc.). In addition to accessing full virtual desktop environments, applications can be published as RemoteApp programs from the session host servers allowing users to connect to specific applications over Remote Desktop – these applications are rendered in a local window on the client and from the end-user perspective, behave almost identically to local applications.

Sample Session-Based Deployment Architectures

For testing or starting a small session-based VDI deployment, you can install the RD Session Host, RD Connection Broker and RD Licensing role services on a single server through the Quick Start deployment option.  For an enterprise deployment, we would need to design the infrastructure to deal with availability and scaling concerns.  As with any environment, the complexity increases with the availability requirements – a session-based VDI deployment that requires five nines availability and needs to support a large concurrent user base will be significantly larger and more complex than a solution for a small business with reduced availability requirements. Let’s examine three sample deployment architectures designed to meet the requirements of organizations of varying size and complexity.

Small Business Deployments

For the small business environment that doesn't have strict availability requirements and has only a small number of thin-client devices and remote users, we can deploy a simple infrastructure. For these types of environments, you can deploy the following RDS role services on a single host by using the Quick Start deployment option: RD Connection Broker, RD Session Host and RD Licensing.

While a detailed discussion of sizing requirements is outside the scope of this article, this type of environment is generally suitable for 45-50 simultaneous user connections running standard line-of-business applications (e.g. Office, web browsers, thick-client applications) that are not graphics intensive.

Figure 1. – Basic Small Business Session-Based Architecture

Small Business Deployments with Increased Availability

For the small business environment that has increased availability requirements, the deployment becomes only slightly more complex. Windows Server 2012 introduces the Active/Active Broker feature that allows you to provide fault-tolerance and scalability to the connection broker role service without the requirement to implement complex failover-clustering. Instead, you simply combine both connection broker servers under a single DNS entry.

In this architecture, we deploy both the session host and connection broker role services on two identically configured servers, configure Active/Active brokering and register both session hosts in a new session collection. A SQL Server instance is required to allow the connection brokers to synchronize and track sessions. User connections will be evenly distributed across the session host servers – if a single RDS server fails, users will still be able to access their virtual desktop environment.

If this type of environment needs to support the loss of a single RDS server without degradation in the end-user experience, then the previous recommendation of 45-50 simultaneous users should be followed. If some amount of degradation in the end-user experience is acceptable or the business accepts that a reduced number of users will be able to connect in the event of a server loss, this number could be increased to 75-100 simultaneous users.

Figure 2. – Small Business Session-Based Architecture with Increased Availability

Enterprise Scale Deployments

For enterprise deployments that have strict availability requirements, need to support a large number of thin-clients and remote users and also require the ability to continue scaling out the environment as the number of users increases, the architecture becomes much larger and more complex.

In this architecture, the connection broker role service is deployed on two dedicated servers in Active/Active Broker mode using round-robin load balancing. For connection broker synchronization and user session tracking, SQL Server is deployed as a clustered failover instance on two dedicated servers to provide fault-tolerance. Microsoft testing shows the connection broker role service operating in Active/Active Broker mode on two servers can maintain reasonable endpoint connection response times (< 2.5 seconds) for up to 2,000 simultaneous connections before end-user observable degradation is encountered.

Fault tolerance and scalability are further accomplished by deploying the session host role service on three dedicated servers registered in a session collection with the Active/Active connection broker farm. User connections will be evenly distributed across the session host servers – if a single RDS server fails, users will still be able to access their virtual desktop environment. To meet scalability requirements, the enterprise can simply add session host servers to the session collection to support the increasing number of users.

This architecture also provides fault tolerance in Active Directory Domain Services with two domain controllers deployed on dedicated servers.

If this environment needs to support the loss of a single RDS server without degradation in the end-user experience, then the session host servers should not exceed 66% of their total capacity, or about 34 simultaneous connections per session host server. If some amount of degradation in the end-user experience is acceptable, or the business accepts that a reduced number of users will be able to connect in the event of a server loss, this number could be increased to 125-150 simultaneous users. As stated before, this environment can continue to scale out to meet user load requirements by simply adding session host servers to the session collection and, if exceeding 2,000 simultaneous connections, adding connection broker servers.
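The N+1 sizing rule described above can be sketched as a quick calculation. This is only a planning aid using the article's own per-server figures, not a measured limit; the function name is illustrative:

```python
# Rough N+1 capacity planning for a session host farm (sketch).
# If one host can fail without degrading the user experience, the
# surviving hosts must absorb the full load, so each host should run
# at no more than (n - 1) / n of its rated capacity.

def safe_users_per_host(rated_capacity: int, total_hosts: int) -> int:
    """Maximum users per host so the farm can absorb one host failure."""
    if total_hosts < 2:
        raise ValueError("need at least two hosts for N+1 tolerance")
    return rated_capacity * (total_hosts - 1) // total_hosts

# Three session hosts rated at ~50 users each: keep each at ~66%
# capacity, i.e. about 33 simultaneous users.
print(safe_users_per_host(50, 3))  # -> 33
```

With only two hosts the same rule gives 50% per host, which is why the small-business guidance earlier in the article is more conservative.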

Figure 3. – Enterprise Scale Session-Based Architecture


As was pointed out in some of the example architectures, there are a number of factors that can affect the deployment model and architecture you choose and the sizing requirements of the individual role services. You should carefully consider things like application memory requirements, graphic intensive workloads, availability requirements and budget constraints when deciding if desktop virtualization is right for your business.

Every environment is different and not every business will be able to realize cost savings, but every business can certainly gain flexibility and an enhanced user experience by implementing desktop virtualization. Hopefully this article will help get you started down the path to providing true flexibility and simplicity to your end-user computing environment.

How can I reset the password for my E-mail account?

DataYard does not keep passwords stored in plain text for security reasons so we are unable to retrieve current passwords. Passwords can be reset by our support staff as needed.

If you have a password that you would like to use, please provide it to us and make sure that it meets the following minimum password requirements:

  • Passwords must be at least 6 characters in length and include a minimum of 1 numeral or special character (examples: %$&^&*)
  • Passwords can NOT contain the username. For example, if your username is ‘test’, the word ‘test’ cannot be part of your password.
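The two rules above can be sketched as a small validation function. This is only an illustration of the stated policy; the exact set of accepted special characters is an assumption based on the examples given:

```python
import re

# Sketch of the password policy above: at least 6 characters, at least
# one numeral or special character, and the username must not appear
# anywhere in the password. The special-character set is an assumption.

def password_ok(password: str, username: str) -> bool:
    if len(password) < 6:
        return False
    if not re.search(r"[0-9%$&^*!@#]", password):
        return False
    if username and username.lower() in password.lower():
        return False
    return True

print(password_ok("s3cure", "test"))    # True: long enough, has a numeral
print(password_ok("testing1", "test"))  # False: contains the username
```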


If you have a Hosted Exchange account with us, you can log into your account through the web interface and change your password.

CLEC & xDSL Technology


DataYard has been providing internet access and dedicated WAN services for years, but did you know that as a CLEC, or competitive local exchange carrier, we are one of fewer than 100 such carriers in Ohio?

There’s a lot of mystery behind what a CLEC is and the relationship that exists between CLECs and the ILECs such as AT&T, Verizon, and other incumbent local exchange carriers. An ILEC is a telephone company made up of one or more Regional Bell Operating Companies, or Baby Bells, that remain from the 1984 breakup of the Bell System and that maintain and operate local wiring exchanges in their respective regions.

This post aims to demystify some of that relationship, and explain some of the technologies that the CLEC status allows us to deliver to our customers.

CLEC Landscape Summary

So what is a CLEC? CLECs are always in competition with the local incumbent carrier – they offer an alternative to ILECs through a different set of regulations. Where ILECs are required to provide a certain set of services to the public, CLECs have the right to compete in the same market using access to the same facilities and infrastructure.

Practically speaking, CLECs are given rights to place hardware in the local exchange facilities of their market, owned and operated by the incumbent carrier.  CLECs are also granted rights to access individual copper loops as an unbundled network element – that is, a single telecommunications network component, required to be offered by ILECs to competitors at near-cost rates.

The local exchange, or central office, is a facility in which all of the nearby residential and business copper infrastructure terminates. These facilities have stood for years delivering a variety of services (voice, DSL/T1, etc.) over copper infrastructure. For a more in-depth look at the rights provided to CLECs, have a look at the Telecommunications Act of 1996 itself, which defined most of the regulation dictating the ILEC & CLEC relationship.

This agreement allows CLECs to offer many of the same services provided by ILECs to end users, using their own hardware, networks, and value add. Despite the competition enabled through the Telecommunications Act, many CLECs have crumbled or been acquired by the larger LECs over the last decade. Most that have survived have done so by focusing on business customers and diversifying into cloud and managed services. DataYard has used its CLEC status to supplement and add value to its managed cloud offerings with a variety of xDSL (modern SHDSL, ADSL, and VDSL technologies), T1, and Ethernet services.

xDSL and Ethernet in the First Mile

While a market for TDM-based T1, T3/DS3 & SONET services still exists, in recent years the focus has shifted heavily towards Ethernet-based services delivered in the first mile. These Ethernet in the First Mile, or EFM, services are often delivered using a variety of xDSL flavors by CLECs (including DataYard) for flexible, reliable, and high-bandwidth applications at small to medium sized businesses. Many users associate the term DSL with a standard residential ADSL modem, with marginal speeds and reliability, but the technology behind modern SHDSL, ADSL2+, and VDSL2 connections has made these Ethernet-based services very attractive to businesses. Consider the following speeds, achievable using a single copper pair (and more with circuit bonding) with current xDSL technologies, when evaluating Ethernet and Metro Ethernet solutions against fiber and wireless solutions.

  • SHDSL – 5.6Mbps download and upload
  • ADSL2+ (Annex M) – 24Mbps download, 3Mbps upload
  • VDSL2 (Common 17a Profile) – 100Mbps download and upload

Like all implementations of DSL, these speeds are achieved with a method of modulating bits into a pattern of tones on the analog circuit, to be picked up on the remote end and converted back into bits. Because of that implementation, all xDSL speeds are very dependent on distance from the exchange where the CLEC or ILEC terminates its copper circuits.

While ILECs in the US have been slow to adopt xDSL outside of the traditional ADSL model, CLECs and much of Europe, Asia, and Australia make heavy use of high-bandwidth DSL.


While fiber-based solutions are going to become more and more common in coming years, copper-based Ethernet services are going to continue to deliver a price-competitive solution. With standards such as the new G.fast (ITU G.9701) delivering up to 1Gbps over existing copper, we should expect continued growth and offerings from CLECs and ILECs alike. DataYard has been delivering xDSL-based, high-bandwidth and highly available Ethernet services to businesses for years, standardizing on industry-leading Adtran as our vendor of choice.

Domain Name Resolution Introduction


DNS (Domain Name System) is the process of resolving a domain name to its IP address. An IP address is the numeric value assigned to a computer connected to a network, which allows it to communicate with other systems, similar to a telephone number. DNS works as the “phone book” for the Internet by translating hostnames into IP addresses. Without DNS resolution, accessing content on the network would be much more difficult and would require us to remember the IP addresses of many different systems and websites across a network.
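For illustration, here is that "phone book" lookup performed programmatically. This sketch uses Python's standard library resolver; `localhost` is used only because it resolves on any machine:

```python
import socket

# One call asks the operating system's resolver to turn a hostname into
# the IP address that programs actually connect to.
ip = socket.gethostbyname("localhost")
print(ip)  # the IPv4 loopback address, e.g. 127.0.0.1
```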

What information can be obtained from a DNS lookup?

You can find name server information by running a ‘dig www.domain.tld’ command from a Linux terminal or by using a browser-based zone lookup.

; <<>> DiG 9.2.4 <<>>
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5193
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;                 IN      A

;; ANSWER SECTION:
          86400   IN      A

;; AUTHORITY SECTION:
          18320   IN      NS
          18320   IN      NS

;; ADDITIONAL SECTION:
          3203    IN      A
          668     IN      A

;; Query time: 1 msec
;; WHEN: Tue Mar 25 12:55:38 2014
;; MSG SIZE  rcvd: 131

There are several record types associated with a domain. Some of the most common record types are A, MX, CNAME and TXT records. The majority of DNS records are A or MX records. These records are vital to successful domain name resolution as they each serve a different purpose but point the domain to its corresponding IP addresses.

ADDRESS Records (A) – Address records allow you to point different parts of your domain to different IP addresses or servers. For example, this would be useful for having “www.domain.tld” point to your web server’s IP address and “mail.domain.tld” point to your mail server’s IP address. Each record includes a “Host Name” value, a corresponding IP address and a TTL (time to live) value, which tells the system how long to cache the record before updating it. You can find out the IP address that a domain name points to by running ‘nslookup www.domain.tld’ at a command prompt.
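The TTL behaviour described above can be sketched as a tiny cache: a resolver serves a cached answer until the record's time-to-live expires, then asks again. Everything here (the cache class, the record, the 203.0.113.10 address from the documentation range) is illustrative, not a real resolver:

```python
import time

# Sketch of TTL-based caching: a cached A record is served until its
# time-to-live expires, after which the resolver must re-query.

class TTLCache:
    def __init__(self):
        self._store = {}  # name -> (ip, expires_at)

    def get(self, name, resolve, ttl, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry and entry[1] > now:
            return entry[0]            # record still fresh: serve from cache
        ip = resolve(name)             # expired or missing: resolve again
        self._store[name] = (ip, now + ttl)
        return ip

calls = []
def fake_resolve(name):
    calls.append(name)                 # count how often we really "query"
    return "203.0.113.10"

cache = TTLCache()
cache.get("www.domain.tld", fake_resolve, ttl=86400, now=0)
cache.get("www.domain.tld", fake_resolve, ttl=86400, now=100)    # cached
cache.get("www.domain.tld", fake_resolve, ttl=86400, now=90000)  # expired
print(len(calls))  # -> 2: only the first and the post-expiry lookups hit DNS
```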

Mail Exchanger (MX) – An important part of the email system is your domain’s “MX” records. MX records tell the world what server to send mail to for a particular domain name. These records include a “Host Name” value, a corresponding IP address and a TTL value. You can set a priority on MX records to allow one server to act as a backup in case your primary mail server is not responding.
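The priority mechanism works by preference number: lower values are tried first, so a higher-numbered record is the backup. A sketch with illustrative hostnames:

```python
# MX records carry a preference value: a sending mail server sorts by
# preference (lowest first) and works down the list until one server
# accepts the connection. Hostnames here are illustrative.

mx_records = [
    (20, "backup-mail.domain.tld"),  # backup: only tried if primary fails
    (10, "mail.domain.tld"),         # primary: lowest preference wins
]

delivery_order = [host for pref, host in sorted(mx_records)]
print(delivery_order)  # ['mail.domain.tld', 'backup-mail.domain.tld']
```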

Canonical name (CNAME) – These are usually referred to as alias records, since they map an alias to its canonical name. When a name server looks up a name and finds a CNAME record, it replaces the name with the canonical name and looks up the new name. This allows you to point multiple DNS records at one IP address without specifically assigning an A record to each host name. If your IP address were ever to change, you would only have to change one A record rather than many.
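The alias-following step can be sketched with a toy zone: the resolver keeps substituting the canonical name until it reaches an A record. The zone data and addresses below are illustrative:

```python
# Toy zone: two aliases point at one canonical host, which holds the
# single A record. Names and the address are illustrative.
zone = {
    "www.domain.tld":  ("CNAME", "web.domain.tld"),
    "blog.domain.tld": ("CNAME", "web.domain.tld"),
    "web.domain.tld":  ("A", "203.0.113.10"),
}

def resolve(name, max_chain=8):
    for _ in range(max_chain):        # guard against alias loops
        rtype, value = zone[name]
        if rtype == "A":
            return value
        name = value                  # CNAME: replace and look up again
    raise RuntimeError("CNAME chain too long")

print(resolve("www.domain.tld"))   # 203.0.113.10
# Changing web.domain.tld's one A record re-points every alias at once.
```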

DNS queries are answered in a number of different ways. A client can answer a query locally using cached information obtained from a previous query. If the local system does not have cached information, it may use an iterative query to find the needed information.

An iterative query is one in which the DNS client allows the server to return the best answer it can give based on its cache or zone data. If the queried DNS server does not have an exact match for the queried name, the best it can return is a referral: a pointer to another DNS server to check. The client then queries the DNS server it was referred to, and this back-and-forth process continues until it locates a DNS server that is authoritative for the queried name, or a timeout occurs.

An authoritative name server provides the actual answer to a DNS query; this answer comes from the DNS server that hosts the records for the domain. You can find the authoritative name servers for a domain by doing a WHOIS lookup on the domain.

A DNS server can also query or contact other DNS servers on behalf of the client to fully resolve the name, then send an answer back to the client. This process is known as recursion.
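The iterative answer-or-referral loop described above can be modelled with a handful of toy servers. All server names, zones, and the address here are illustrative, and real resolvers of course do far more (caching, timeouts, retries):

```python
# Toy model of iterative resolution: each server either answers
# authoritatively or returns a referral to a server closer to the answer.
servers = {
    "root":           {"referral": {"tld": "tld-server"}},
    "tld-server":     {"referral": {"domain.tld": "ns1.domain.tld"}},
    "ns1.domain.tld": {"answer":   {"www.domain.tld": "203.0.113.10"}},
}

def iterative_lookup(name):
    server = "root"
    for _ in range(10):                       # bound the referral chain
        data = servers[server]
        answers = data.get("answer", {})
        if name in answers:
            return answers[name]              # authoritative answer
        # otherwise follow the referral whose zone matches the name
        server = next(s for zone, s in data["referral"].items()
                      if name.endswith(zone))
    raise RuntimeError("no answer found")

print(iterative_lookup("www.domain.tld"))  # 203.0.113.10
```

A recursive resolver runs this same loop on the client's behalf and hands back only the final answer.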

Domain Lookup Process


  1. The web browser will check the local cache on your computer to try to resolve the domain name. If it can get an answer directly, it proceeds no further. You can also override this lookup process by making changes to the hosts file on your local computer; this allows your PC to override outside DNS information and look for the hostname at an IP address that you’ve specified.
  2. If an answer cannot be obtained from your local cache, your system will reach out to your ISP’s recursive DNS servers. You can find your primary DNS servers by running ‘ipconfig /all’ from the command line and looking for the IP addresses listed next to DNS servers.
  3. These name servers will first search their own cache to see if the domain has been resolved recently, or check whether they are authoritative for the domain; if so, they will return those results.
  4. If they have no cached information, they will strip out the TLD (Top Level Domain) and query a root name server to find out which name servers are responsible for that TLD. Once this information is obtained, they will query the authoritative server for that TLD for the IP of the domain you are trying to resolve.
  5. The authoritative name server holds the actual records for a domain name. It does not provide cached answers obtained from another name server and does not query other servers; it only answers queries about domain names that are configured locally.
  6. The authoritative name server will respond with the IP address of the domain name you’ve looked up, and this information is returned to your system.

Breaking down a domain name


When a domain name needs to be resolved, the DNS servers first break the domain name down into pieces, starting at the top-level domain, and follow a path to the authoritative name servers for the domain name that you are trying to resolve.
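That breakdown is simply the name split on dots, read right to left. A two-line sketch with an illustrative name:

```python
# A domain name is a dot-separated list of labels; resolution walks the
# hierarchy right to left, from the TLD down to the host.
name = "www.domain.tld"
labels = name.split(".")
print(labels)                  # ['www', 'domain', 'tld']
print(list(reversed(labels)))  # ['tld', 'domain', 'www'] - the lookup order
```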


As you can see, the Domain Name System is essential in the use of the Internet. Without this complex system of servers working together you would not be able to simply type in names for websites, but would have to remember the numerical IP Addresses in order to get anywhere on the Internet.

Outlook 2007 will not store your Connect Exchange account password.

For Outlook 2007 users: you must be running Outlook 2007 SP3, or no fix will resolve this issue. Once you have installed the necessary patches the issue should be resolved. If this does not fix the issue, please verify that the ‘Logon network security’ setting is set to ‘Password Authentication (NTLM)’.

This setting can be found in Tools > Accounts > Account Settings > Advanced Settings.

NTP Amplification DDoS Attacks

Over the past few months the Internet has seen increased DDoS (distributed denial of service) attack activity, which started with DNS amplification attacks and then moved on to NTP amplification attacks. For now the DDoS attacks have stopped; however, it’s only a matter of time before the next DDoS attack method is discovered. It’s an ongoing effort for administrators to keep servers patched to prevent these types of exploits. The last one I found myself dealing with was the NTP monlist amplification attack, which used several customers’ NTP (Network Time Protocol) servers that were available to the public Internet. One vulnerable NTP server was generating ~500 Mbps of outbound traffic before we shut its access down.

A DDoS is a distributed denial of service attack in which several computers are configured to flood data at a target. The target’s Internet connection can become oversaturated, disrupting its connectivity. This, in effect, can take down any Internet server for the length of the attack. There are several types of DDoS attacks, one of them being the NTP amplification DDoS attack. The latest NTP DDoS attack method allows remote users to trick an Internet-facing server running NTP into flooding data at a target, without having access to the NTP server itself. These attacks can be kicked off easily and are very hard to trace to the original source of the attack.

By using the NTP monlist command to query an exploitable NTP server’s last 600 associations, an attacker can send a small amount of data to the server and get back a response that is several times larger than the original request. When the NTP monlist command is used as intended it causes no harm; however, when the source IP address is spoofed it can be used for denial of service attacks. If an attacker spoofs the source IP in their NTP monlist request, the response is sent to the spoofed IP address. Using a script to repeat the NTP monlist command against an exploitable server, the attacker can generate a very large amount of traffic and aim it wherever they want. Kicking off this type of DDoS requires only a small amount of Internet bandwidth and can generate a flood of data hundreds of times larger than the original monlist request. If an attacker is able to identify an exploitable NTP server that has access to a lot of bandwidth, that server is a prime candidate for a large NTP amplification attack. An attacker could use several servers around the Internet to issue NTP monlist queries against an exploitable NTP server with access to a large amount of bandwidth.

Basic NTP Amplification Attack flow

  1. The attacker sends 0.5 Mbps worth of NTP monlist requests to the exploitable NTP server, spoofing the source IP address as the target’s address.
  2. The exploitable NTP server responds with 15 Mbps of response data sent to the target, not to the attacker.
  3. The target’s ingress bandwidth is 10 Mbps, so they are unable to use their Internet connection due to oversaturation from the NTP monlist flood running at 15 Mbps.


  • The attacker is only using half of their available upload bandwidth to completely take the target offline.
  • This example would be an NTP amplification attack of 30 times the original data sent.
  • Repeat this process with several different clients and NTP servers and you now have an NTP amplification DDoS attack.
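The arithmetic behind the example above, written out:

```python
# The bandwidth figures from the example attack flow above.
attacker_mbps = 0.5     # monlist requests sent with a spoofed source IP
reflected_mbps = 15.0   # response traffic the NTP server sends the target
target_ingress = 10.0   # the target's total inbound capacity

print(reflected_mbps / attacker_mbps)   # 30.0 - the amplification factor
print(reflected_mbps > target_ingress)  # True - the target's link saturates
```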

I set up a lab example of an NTP server running with monlist enabled and used ntpdc to issue the monlist command. The request was 234 bytes and the response generated was 2,676 bytes; in this example it would be possible to amplify to 11 times the original size of the request sent to the server. My test server had 33 NTP associations, while the maximum that monlist will respond with is 600. With 600 associations, a much larger response to the same 234-byte request is possible. Below is a screenshot of a Wireshark packet capture that shows the monlist request and response data.

You can see above the request code, MON_GETLIST_1, and the size of the request, 234 bytes, under the Length column.



The response to the monlist request was 6 UDP packets: 5 of 482 bytes each and a final packet of 266 bytes. Adding up the bytes in those 6 packets gives a total of 2,676 bytes.
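Checking those lab numbers as a quick calculation:

```python
# The packet sizes captured in the lab example above.
request_bytes = 234
response_packets = [482] * 5 + [266]   # six UDP response packets

total = sum(response_packets)
print(total)                           # 2676 bytes of response data
print(round(total / request_bytes))    # ~11x amplification in this test
```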

If an attacker spoofs the source address in the NTP monlist request, the returned data is sent to a different server. If the attacker continues to make this request with a spoofed IP address, the NTP server will keep sending the response to the victim’s computer. This amplification attack was recorded as CVE-2013-5211 in the middle of January 2014. A lot of these attacks could have been prevented if all internet providers implemented BCP 38. BCP 38, also known as RFC 2827, was written almost 14 years ago. It is a Best Current Practice that recommends filtering traffic that should never be seen from a user – for example, dropping any traffic with a spoofed source IP address that is not in the valid range assigned to that user. DataYard implements this using a combination of Cisco ACLs and Cisco Unicast Reverse Path Forwarding.

How can you tell if your server is vulnerable?

  • There is an open project that was started to help identify vulnerable NTP servers; you can have your public address range scanned as long as it’s smaller than a /22 (1024 addresses).
  • You can use ntpdc in Linux and issue the monlist command to see if you get a response.
Example ntpdc output of an NTP server that is vulnerable
  • NMAP has a useful script that can be used to see if a server is responding to the NTP monlist request as well.

nmap -sU -p U:123 -n -Pn --script ntp-monlist
|   Target is synchronised with xx.yy.61.67
|   Public Servers (1)
|       xx.yy.61.67
|   Public Clients (1)
|       xx.yy.177.51
|   Other Associations (13)
|       xx.yy.100.2 (You?) seen 5 times. last tx was unicast v2 mode 7
| seen 78664 times. last tx was unicast v0 mode 0
|       xx.yy.177.108 seen 4 times. last tx was unicast v2 mode 7
|       xx.yy.129.66 seen 1 time. last tx was unicast v2 mode 7
|       xx.yy.203.115 seen 7 times. last tx was unicast v2 mode 7
|       xx.yy.95.174 seen 1 time. last tx was unicast v2 mode 7
|       xx.yy.253.2 seen 3 times. last tx was unicast v2 mode 7
|       xx.yy.54.31 seen 1 time. last tx was unicast v2 mode 7
|       xx.yy.177.66 seen 3 times. last tx was unicast v2 mode 7
|       xx.yy.81.113 seen 1 time. last tx was unicast v2 mode 7
|       xx.yy.54.103 seen 2 times. last tx was unicast v2 mode 7
|       xx.yy.244.49 seen 1 time. last tx was unicast v2 mode 7
|       xx.yy.230.75 seen 1 time. last tx was unicast v2 mode 7

Please Patch your NTP server!

There are still many unpatched NTP servers out on the Internet that can be used in future DDoS attacks. If your NTP server is responding to the NTP monlist command, you should upgrade to a later version of ntpd. If you are not able to upgrade your ntpd process, there are several examples online that show how to lock down or even completely disable the monlist command.
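For the curious, the probe that tools like ntpdc and the NMAP script send is tiny. The sketch below builds what I understand to be the standard 8-byte NTP mode 7 monlist request (implementation 3 = XNTPD, request code 42 = MON_GETLIST_1); the network portion is left commented out, and should only ever be aimed at a server you own:

```python
import struct

# Build the NTP mode 7 "monlist" request payload. The first byte packs
# response=0, more=0, version=2, mode=7; then sequence, implementation
# number, and request code, followed by four zero bytes of header fields.
first_byte = (0 << 7) | (0 << 6) | (2 << 3) | 7   # 0x17
payload = struct.pack("!BBBB", first_byte, 0, 3, 42) + b"\x00" * 4

print(payload.hex())  # 1700032a00000000

# To probe a host you control (never aim this at servers you don't own):
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.settimeout(2)
# s.sendto(payload, ("your.ntp.server", 123))
# data, _ = s.recvfrom(4096)  # any reply here means monlist is enabled
```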


OpenSSL Security Vulnerability: Heartbleed


Late yesterday, a vulnerability in the OpenSSL libraries, CVE-2014-0160, was announced. The OpenSSL libraries are used to provide the secured or encrypted connections for web stores like Amazon or eBay, banks, and other sites like Google, Facebook, and Twitter. This vulnerability would allow attackers to learn the private keys used to encrypt and decrypt the secured information.

Several of our servers were affected by this vulnerability, including our Linux Fusion platform and Connect webmail interface. We have updated all vulnerable services but strongly recommend that all customers with SSL-enabled sites have their SSL certificates revoked and re-issued. Some customers may see warnings when connecting to SSH/SFTP for the Linux Fusion platform, as we have also re-generated the keys for SSH/SFTP. If you have any questions or concerns, please contact support at 800-982-4539 or by email at

For more information on the vulnerability please visit: or