Outlook 2010 starting in Safe Mode

If Outlook 2010 starts in Safe Mode after you run Windows Updates on or after 12/8/2015, the following steps will resolve the problem.

Uninstall the KB3114409 Outlook 2010 update:

  1. Open Installed Updates by clicking the Start button, clicking Control Panel, clicking Programs, and then, under Programs and Features, clicking View installed updates.
  2. Select the update KB3114409, right-click it, then click Uninstall. If you’re prompted for an administrator password or confirmation, type the password or provide confirmation. (If you have trouble finding the update in the list, use the search box in the top right of the screen.)
  3. Restart the computer.
  4. Open Outlook 2010.

If you are asked whether you would like to start in Safe Mode, select No, then delete the shortcut you launched Outlook from and add it back.
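For administrators who prefer the command line, the same update can be removed silently with the Windows Update Standalone Installer (wusa.exe). Here is a minimal sketch in Python, assuming a Windows machine and an elevated prompt; the `uninstall_command` helper name is ours:

```python
import subprocess

KB_NUMBER = "3114409"  # the Outlook 2010 update to remove

def uninstall_command(kb: str) -> list[str]:
    """Build the wusa.exe command line that silently removes update KB<kb>."""
    return ["wusa.exe", "/uninstall", f"/kb:{kb}", "/quiet", "/norestart"]

# To run it (Windows only, from an elevated administrator prompt):
#   subprocess.run(uninstall_command(KB_NUMBER), check=True)
# Then restart the computer and open Outlook 2010.
```

The `/quiet` and `/norestart` switches suppress the prompts and the automatic reboot, so you can restart at a convenient time.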

 

Hack Your Own IOT Thing

On November 18th, 2015, DataYard participated in the 9th Annual Taste of IT put on by Dayton’s Technology First at Sinclair’s Ponitz Center. It was a great show — well attended, and lots of energy. Many thanks to the organizers, Ann Gallaher and Michelle Marek, as well as the many volunteers and speakers who helped put on a wonderful technology event here in the heart of Dayton.

Taste of IT 2015

I had the pleasure of doing a presentation in the afternoon on a topic that has really captured my imagination: the ideas behind IoT, the “Internet of Things”. Lately, I’ve been exploring the concepts behind network-connected Things, and building my own prototypes to bridge physical and virtual realities.*

My talk began with a brief, high-level description of how Things work, and then dove into some of the hardware, software, and apps that make building Things relatively easy.

For those who are interested, I’ve uploaded my slides and presenters notes to SlideShare.net. Feel free to contact me with questions or project ideas.

David Mezera

* Yes! The picture in the banner is a Thing I recently built and tested for our data center — an underfloor pressure and temperature sensor that streams data out via WiFi to Blynk. It uses the Sparkfun ESP8266 Thing Dev Board and the Sparkfun MPL3115A2 pressure/altitude/temperature sensor.

How can I configure Outlook 2013 to work with Connect Mail?

First, you will need to decide whether to set this account up as a POP account or an IMAP account. If you are unsure, you can review the differences here and then use the settings for your chosen account type.

 

Select the File tab

outlook2013_1

Then choose Info

Select Add Account

outlook2013_2

 

Choose Manual setup or additional server types and then Next.

outlook2013_3

 

Choose POP or IMAP and then Next.

outlook2013_4

 

FOR POP3:

You will then need to enter the following information for POP3:

  • Your Name: Your name as you wish it to appear
  • E-mail Address: This is your full email address or alias
  • Account Type: POP3
  • Incoming mail server: pop3.donet.com
  • Outgoing mail server: smtp.donet.com
  • User Name: the username provided to you with your account information
  • Password: the password provided to you
  • Require logon using Secure Password Authentication (SPA): Should be unchecked

outlook2013_5

 

You will then need to select the More Settings button in the lower-right corner of the screen.

On the Outgoing Server tab, My outgoing server (SMTP) requires authentication must be checked, with the option Use same settings as my incoming mail server selected.

outlook2013_6

 

You will then go to the Advanced tab and enter your server port information:

POP3:

Incoming Server Information

  • Server Type: POP
  • Server Address: pop3.donet.com
  • Port: 995
  • Use the following type of encrypted connection: SSL
  • Authenticate using: Clear Text
  • Logon User Name: Your username

Outgoing Server Information

  • Server Address: smtp.donet.com
  • Port: 587
  • Use the following type of encrypted connection: TLS

outlook2013_7

Once this is complete, click OK to return to the main screen. Click Next to test the account settings. If the test completes successfully, click Next to exit the setup wizard.
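As a quick sanity check, the POP3 settings above can also be exercised from Python’s standard library. This is a hedged sketch, not part of the Outlook setup; `check_pop3` is our own helper name, and running it requires a real Connect Mail username and password:

```python
import poplib
import ssl

# Settings from the walkthrough above
POP3_HOST = "pop3.donet.com"
POP3_PORT = 995  # SSL

def check_pop3(user: str, password: str) -> int:
    """Log in over SSL and return the number of messages in the mailbox."""
    conn = poplib.POP3_SSL(POP3_HOST, POP3_PORT,
                           context=ssl.create_default_context())
    try:
        conn.user(user)
        conn.pass_(password)   # plain-text auth inside the SSL tunnel
        count, _octets = conn.stat()
        return count
    finally:
        conn.quit()
```

If the call raises an authentication error, re-check the username and password; a connection timeout usually points to a wrong host or port.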

 

FOR IMAP:

You will then need to enter the following information for IMAP:

  • Your Name: Your name as you wish it to appear
  • E-mail Address: This is your full email address
  • Account Type: IMAP
  • Incoming mail server: imap.donet.com
  • Outgoing mail server: smtp.donet.com
  • User Name: the username provided to you with your account information
  • Password: the password provided to you
  • Require logon using Secure Password Authentication (SPA): Should be unchecked

outlook2013_8

 

You will then need to select the More Settings button in the lower-right corner of the screen.

On the Outgoing Server tab, My outgoing server (SMTP) requires authentication must be checked, with the option Use same settings as my incoming mail server selected.

outlook2013_6

 

You will then go to the Advanced tab and enter your server port information:

IMAP:

Incoming Server Information

  • Server Type: IMAP
  • Server Address: imap.donet.com
  • Port: 993
  • Use the following type of encrypted connection: SSL
  • Authenticate using: Clear Text
  • Logon User Name: Your username

Outgoing Server Information

  • Server Address: smtp.donet.com
  • Port: 587
  • Use the following type of encrypted connection: TLS

outlook2013_9

 

Once this is complete, click OK to return to the main screen. Click Next to test the account settings. If the test completes successfully, click Next to exit the setup wizard.
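The IMAP and SMTP settings above can be sanity-checked the same way with Python’s standard library. Again, a sketch only; the `check_imap` and `check_smtp` helper names are ours, and a valid Connect Mail username and password are required:

```python
import imaplib
import smtplib
import ssl

# Settings from the walkthrough above
IMAP_HOST = "imap.donet.com"
IMAP_PORT = 993   # SSL
SMTP_HOST = "smtp.donet.com"
SMTP_PORT = 587   # TLS (STARTTLS)

def check_imap(user: str, password: str) -> str:
    """Log in over SSL and open INBOX read-only; returns 'OK' on success."""
    with imaplib.IMAP4_SSL(IMAP_HOST, IMAP_PORT) as conn:
        conn.login(user, password)
        status, _data = conn.select("INBOX", readonly=True)
        return status

def check_smtp(user: str, password: str) -> None:
    """Authenticate to the outgoing server, upgrading the session to TLS first."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as conn:
        conn.starttls(context=ssl.create_default_context())
        conn.login(user, password)  # same credentials as the incoming server
```

Note that the outgoing connection starts unencrypted on port 587 and is upgraded with STARTTLS, which matches the TLS setting in the Outlook dialog.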

If you have any questions please contact DataYard Support.

It’s a wrap: NodeBots 2015

NodeBots 2015 was a “smashing” success!

The NodeBots 2015 crowd!

The event, sponsored by Sparkbox, DataYard, and Gem City JS, gave folks the opportunity to experiment with an Arduino, motors, a simple robotic frame, and JavaScript code (via Node.js) to glue it all together and make a functional robot. Everyone came up with a different robot using the assortment of materials provided, and it was crazy seeing what everyone’s imagination produced. Amazing stuff!

DataYard’s bot entry, “Yellow Jacket”.

The DataYard bot, “Yellow Jacket”, battled very well but was eliminated in the final round against “Tarnation”. Wherever their bots finished, everyone seemed to have a great time learning and competing.

I’m already looking forward to NodeBots 2016!

The Conductor: New Tool to Manage DataYard Services!

conductor_hat
The Conductor

I’m excited to announce that yesterday we pushed out version 1.0 of The Conductor, the web portal we developed to equip customers with the tools to perform basic administrative functions on their DataYard services.

We only have support for Connect Exchange services built into The Conductor today, but that’s what we set out to accomplish with this version. The Conductor allows our customers to do the self-service tasks people expect to be able to do on their own: create mailboxes, delete mailboxes, change passwords, adjust settings, and the like — any time, day or night.

Good security was a huge part of our development effort for the interface, so users of The Conductor have to be flagged by their unique email addresses as admins for their particular domains. DataYard customers who already have Connect Exchange and want access to the tools should contact our support staff by phone at 937-226-6896, or email them at support@datayard.us. We’ll enable your access after confirming your identity as a domain admin.

By the way, Connect Exchange is DataYard’s implementation of our hosted Exchange offering. It’s perfect for organizations who need to share calendars and contacts, or use multiple devices and need to synchronize communications across desktops, laptops, tablets, or phones. The infrastructure for the entire platform is here in Dayton, Ohio, so your important data is “offsite, but not out of sight”. If you’re interested in learning more about this service please contact our sales department. We’d be happy to talk through the details with you.

NodeBots 2015

I’m thrilled to announce that DataYard is a sponsor for GemCityJS’s local “NodeBots” competition on July 25!

In a nutshell, participants will get a bag of parts when they check in at the Firefly Building, the venue hosting the event. The parts bag will have an Arduino, some motors, a motor control board, a T-shirt, and other goodies. We’ll also have tools and materials available for everyone to use. The instructors will give some basic demos on how to use an Arduino to control hardware, and then participants will be off to the races to build a JavaScript-controlled robot. We’ll break for pizza, keep building, and then hold a head-to-head “battle bot” style competition to determine the winning robot.

The tickets are $35 now, going up to $45 as the day approaches — just to defray costs. Everything participants build will be theirs to keep and tinker with after the event.

I think anyone, adult or teen, could have a lot of fun doing this, and no advanced Arduino or programming experience is required. As the NodeBot web site says, “The only prerequisite to attend is a desire to build.”

Come on out and have a blast learning something new, building something cool, competing to win, and meeting new folks in the Dayton Maker community!

To get more details online visit http://gemcityjs.com/nodebots.

“Venom” Vulnerability Details Released

This week the “Venom” vulnerability was announced, affecting a number of virtualization systems, like Xen, KVM, and VirtualBox (http://www.zdnet.com/article/venom-security-flaw-millions-of-virtual-machines-datacenters/). Attackers can exploit the defect, a flaw in virtual floppy disk controller code written more than 10 years ago, to crash the hypervisor. With the hypervisor disabled, an attacker would then be able to access the virtual machines of other people or companies running on the same server.

Prior to Wednesday’s announcement, software makers developed patches to close the door on the exploit, but not all hosting providers have been able to roll the patches out to their affected systems. As a result, a number of virtualization platforms running the affected code remain vulnerable.

Since our systems are built on VMware, DataYard’s cloud infrastructure is not vulnerable to this exploit. Microsoft’s Hyper-V and Bochs are also not affected by this bug.

We’re Hiring! Receivables & Payables


DataYard is seeking an Accounts Receivable / Payable Clerk to maintain customer profiles, issue invoices, post payments, collect past due accounts and respond to customer inquiries (by phone and e-mail). In addition, the successful candidate will reconcile recurring vendor invoices and enter them for payment.

Requirements

  • High school diploma. Additional accounting coursework is a plus.
  • Minimum of 3 years of experience in an accounting environment.
  • Experience working with receivables and collections. Payables experience is a plus.
  • Good communication skills and the ability to work with people as clients / customers.
  • Working knowledge of Excel, Word, Outlook and accounting computer systems.

Functions

Receivables

  • Set up and maintain customer profiles and billing records, including sales tax records
  • Process daily invoice batches
  • Process manual invoices as needed
  • Process incoming payments (checks, credit cards and electronic transfers)
  • Analyze past due accounts, contacting customers as needed
  • Respond to customer billing inquiries
  • Coordinate with sales to ensure customers are billed in an accurate and timely manner

Payables

  • Reconcile regular vendor invoices for accuracy, investigating discrepancies
  • Enter regular payables invoices for payment

 

This is a 20 to 30 hour per week position.

E-mail resume to hr@datayardworks.com.

Industry “Best Practices”?? I Don’t Care!

A staple of the information technology industry is the notion of a “best practice”. I’m not very fond of it.

Best practices are supposed to be the golden standards by which technological solutions are implemented and problems resolved in the surest way possible. They’re often researched and developed by the vendor who produced the technology, and so they come with a certain weightiness. For example, if Microsoft says that an Exchange environment needs to be built a certain way for a certain number of users to work well, the conventional wisdom asks, “who am I — a lowly user of their technology — to disagree?”

There’s an irony in this. For years, I have heard technology experts complain that Cisco certification exams reflect perfect-world environments that don’t exist. They’ve said that they were only able to pass their exams by answering questions the way Cisco says you should do things, not the way they would actually solve networking problems in real life. This is just one example with one vendor, but it brings to the surface the familiar conflict between book learning and hands-on experience. People in the business know that the two solution paths are often different.

Therein lies the paradox. When it comes to certification exams, our experience tells us that the textbook solution, the “best practice”, may not genuinely be the best way to address a problem. It bugs us when these disconnects happen, but we play along with the vendor in order to get the certification and their stamp of approval. Yet, when we face a real world problem in search of a solution, we tend to seek out the industry whitepaper on the best practice and give it special reverence.

Whitepapers serve their purpose, but they’re crippled from the start as guides to perfection. First of all, who gets the special privilege of defining the criteria for all that is required to be “best”? Is there any room in the determination of what is best for a customer’s hard budget constraints, deployment timelines, and flexibility? Or is an industry best practice limited from the very beginning because it starts with a rigid problem specification that doesn’t match a real technical challenge, assumes unlimited access to resources and time, and assumes a pristine lab environment in which to operate?

Experience has taught me that best practices are merely templates to start from and nothing more. They are just tools that give us a benchmark to work from, and maybe they establish some realistic performance expectations. However, a person’s real-world experience deploying and understanding technology is always infinitely more valuable to me. Over-reliance on vendor best practices can be seen as forever leaning on a technological crutch. Saying that “vendor’s so-and-so’s best practice in this situation is to…” may appear to add credibility to a course of action, but it can also stifle experimentation during problem solving, innovation, and independent thought. We need more than just the ability to read and regurgitate our Google search results.

My advice? Read the whitepaper, read the best practice. Bask in the information presented, and then put it aside and be a critical thinker. Technology professionals are fantastically innovative, and they need to trust their own experience and imagination to solve their own unique problems, perhaps in even new ways that make sense to them, their customers, and their employers. No new thinking ever came out of blindly following a vendor best practice.

With Great Power Comes Great Responsibility

The technological landscape continues to evolve at a fantastic pace, and staying on top of it all can be challenging. In spite of the high rate of change I think there are some “timeless” lessons we’ve learned over the last two decades, lessons that will continue to be true for the foreseeable future. Here are three lessons that are part of our DNA today and are integrated in our daily thinking.

The first is that the demand for robust, high-performance Internet access and applications consistently increases. It never shrinks. Our clients today are getting much more comfortable taking their applications off-site and into the cloud, so reliable, fast, low-latency connections to the network are becoming increasingly vital to daily operations. Furthermore, our users are connecting to their data using a dizzying array of devices, applications, and APIs from a diverse number of geographic locations. This trend is only going to continue as more computing power is loaded into smartphones and tablets, and small-footprint IoT (Internet of Things) devices like Arduinos and Raspberry Pis multiply.

The second is that good data and application security cannot be an afterthought. Protecting data, and your users’ access to it, has to be an important element of the system from Day 1. Good security is not something you do once and then assume you’re done, nor is it something you bolt onto an already-built system. Good security requires processes that are enforced, systems and software that are monitored around the clock, and software updates and security patches — at least at the operating system level — for the lifespan of the application. Failing to take security seriously from the outset means that your critical systems might be exposed to potential compromise, and that critical business data might be corrupted or lost.

Thirdly, a tremendous amount of planning and care is needed to integrate new Internet services into a client’s enterprise with nearly zero downtime to the end user. This cannot be done haphazardly. It requires knowledge of a client’s working environments, their online habits, their schedules, their processes. It requires critical thinking and the judgment skills necessary to weigh competing priorities to help create installation plans that minimize negative ripple effects when new systems are brought online. It requires the ability to communicate clearly, on both a technical and an operational level. A client can’t have a positive technology experience if they don’t understand what’s going on, if they don’t know who is leading the project, or if they never know where they are in the process.

For the last few years I’ve used a line from a superhero movie to describe the importance of the role we at DataYard play on behalf of our clients: “With great power comes great responsibility.” We take the management of our entire infrastructure, and the management of individual client applications from end-to-end, very seriously. When you have the power to bring an enterprise’s technology to a screeching halt you tend to open technical doors very carefully. You only open those doors when you absolutely have to. You do it with a purpose, and you know — in advance — exactly what you’re going to do when you’re on the other side. To be careless with a client’s applications or data only invites disaster.

Nobody likes disasters, including technological disasters. Responsible technologists avoid disasters by first imagining all the things that could go wrong. Then they use their position and influence to mitigate those risks one by one through good processes, building in capacity and redundancy, and preparation prior to plan execution. To do anything less is a disservice to your users.