Rogue Wireless Devices – The Growing Threat to Your Organization

The primary goals of any corporate network are consistency and reliability. Consistent network performance helps avoid unnecessary downtime, improves productivity and reduces total cost of ownership (TCO), but threats to that reliability are coming from an ever-increasing number of sources.

With continued advances in wireless technologies, there are many more employees working remotely using personal devices. They may be at customer locations or a home office and some even work while on vacation. These personal devices access a variety of wireless networks outside your corporate network. There is no doubt that this wireless freedom increases productivity and provides a level of autonomy to employees, but with this access, there are increased risks to the corporate network.

Of all the threats to your network's security, few are as potentially dangerous as the rogue access point.

What exactly is a rogue access point?

A rogue access point is an unauthorized device operating on a corporate wireless network. The device is often a cell phone or tablet that has connected to the company's wireless network and is broadcasting its own access point. Although this can be considered a security breach, it is typically not malicious; these breaches usually come from an employee looking for a convenient way to use the company's wireless network. And it is not just cell phones. A rogue access point could be a WLAN card plugged into a server, or a USB-attached mobile device that creates a wireless access point. Other unauthorized wireless devices may be hidden inside a computer or other system component, or attached directly to a network port or network device, such as a switch or router.
For instance, an employee working at a customer site uses their cell phone as a hot spot for their computer during a company presentation. They return to the office still connected, not realizing their RF signal interferes with the corporate network. In another case, an employee sets up an unencrypted wireless access point in the conference room for a customer project. It is well-intentioned, but they do not realize that their access point could be used by a hacker to enter the corporate network, invisible to the company's internal network monitoring.
Although not typically malicious, these access points can open the corporate network to security threats. For instance, an employee uses their cell phone at lunch to download a web app. The app contains seemingly innocuous malware designed to quietly collect information: stored data like emails, text messages, attachments, credit card numbers, and log-ins and passwords to corporate networks. The employee returns to the office, accesses the company wireless and unwittingly contaminates the corporate network.

Increasing Risk

In addition to cell phones and tablets, the Internet of Things (IoT) is introducing new devices that pose a growing risk to your network, including wearables like the Fitbit and Apple Watch. Although manufacturers' security protocols are constantly being revised and upgraded, new IoT devices are constantly entering the market, presenting new threats. Gartner research estimates that by 2020, more than 25 percent of identified enterprise attacks will involve IoT, yet IoT will account for "less than 10 percent of IT security budgets."

Rogue access can occur in any type of organization and cause performance issues that are hard to identify due to the nature of wireless connectivity.  Many factors affect a wireless signal, including RF interference from signals using the same frequency as a wireless access point and the number of users connecting to an access point. Either of these can affect the overall throughput and performance of your entire wireless network.

Solving the problem

Threats from mobile devices are increasing and can result in data loss, security breaches and compliance violations. Solving the problem depends on the scale and size of the organization and can include risk assessment, policy changes, as well as technology implementation. To discover and monitor unauthorized access points, it takes diligent observation, the right tools, and a bit of intuition that comes from experience.
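
As a simple first step in that observation, a Windows laptop can enumerate every wireless network visible from a given spot so the results can be compared against the list of sanctioned corporate access points. This is only a sketch of the idea, assuming a machine with a built-in Wi-Fi adapter; a proper wireless survey uses dedicated tooling:

# List every SSID and access point (BSSID) within range of this machine.
# Anything broadcasting on-site that is not on the sanctioned list
# deserves a closer look.
netsh wlan show networks mode=bssid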

Rogue wireless devices threaten the quality and consistency of service to your customers as well as the reliability and security of your network. Do you have an unidentified network performance issue? You may need some expert help.


Navigating the Challenges of Outdated Infrastructure

As industry challenges and competition escalate, businesses need to constantly evaluate their systems and infrastructure to improve efficiency and productivity. A competitive advantage means staying on top of technological advancements that impact your industry and implementing technologies, not just as a way to improve internal processes, but also as a driving force for business growth. 

Efficient use of data can help companies become more agile and better equipped to respond to ever-changing markets. This means scalability and systems that communicate with one another throughout the organization from operations to production to supply chain.

Enterprise resource planning (ERP) systems can provide these benefits including real-time capabilities, seamless communication and an overall increase in efficiency. However, since ERP implementation affects the entire organizational process, there are a number of challenges that companies may encounter when upgrading their network.

Our team of consultants, architects and integrators was brought in on a consulting engagement to help one of our clients navigate the technology upgrade options best suited for their particular business and environment.

One of the challenges we encountered was outdated infrastructure—a common issue today as technology continues to evolve rapidly and companies face a multitude of options as they plan for growth.

This client was a well-established company with multiple locations and business units across the USA. The systems that orchestrated every step of its production and operational processes were either home-grown or disparate stand-alone systems inherited through acquisitions. This led to a difficult environment where systems did not communicate with one another and the supply chain itself was not integrated.

The company wanted to consolidate all aspects of the supply chain and production into a single platform that would deliver shorter lead and production times, and a more consistent product at a lower cost to produce.

This manufacturing company was growing fast and needed to move beyond their home-grown systems with a new datacenter that was scalable, flexible and could support innovation and future growth.

Our initial approach was to assess the client's current environment and growing corporate needs. We first delivered a series of ZAG TechTalk workshops to educate the company on their options, including hardware and servers, software upgrades, integration of new systems, and cloud services versus on-premise deployment platforms.

Cloud can be a great solution for some companies; other companies' business needs lead to a more traditional on-premise solution. As experienced integrators, we are well versed in both, and we guide our clients to the best solution for their needs. In this case, based on the customer's business requirements, it was decided to keep the infrastructure in-house.

We recommended new servers and deployed Cisco Nexus® datacenter switches, a fast and reliable switching infrastructure designed for high performance and increased data center efficiency. We also deployed a Nimble Storage Array that would allow for future scalability.

It took six weeks to get the new systems up and running, a fast transition for such an environment. To ensure a smooth transition and in-house management, we trained an on-site engineer and continued to provide level 3 support.

The company realized immediate benefits through the automation of the supply chain, which led to more transparency, less waste, better quality control, and ultimately a more profitable business.

The company was able to capitalize on the new integrated system efficiencies with improved product margins in every business unit, while differentiating itself from its competition by providing a streamlined process and continued high-quality product delivery.

The challenges presented by the company were common issues in a unique environment but the solutions are ever-changing. New vendors, platforms and services are being introduced almost weekly.

As experts in IT integration, ZAG Technical Services, along with the support of our vendor partners, brings the latest options to every client. Our goal, as experts in technology, is to be a reliable partner that enables our clients to succeed.


Business Continuity: Reducing Downtime

Experiencing IT downtime is inevitable for any business, but the threat of downtime can be reduced by having a Business Continuity Plan that includes Disaster Recovery. 

Businesses with any kind of IT infrastructure experience downtime in two ways: scheduled downtime and unplanned downtime. Scheduled downtime is important for executing updates and troubleshooting systems so that they run properly and as intended. Unplanned downtime is an unexpected failure that leaves whole systems or machines out of action and unusable. This can hurt an organization in many ways and cause your business to lose money.

The Cost of Downtime

Most IT environments will experience downtime at some point due to a variety of causes. For this reason, it is critical to have a Business Continuity plan for downtime, so your company does not stay down for long. 

For enterprises, the average cost of an infrastructure failure is $100,000 per hour, according to an article on the DevOps website.

Organizations must also consider the cost of employees' wages during downtime.

“If the company has 10,000 employees who are paid an average of $56 per hour including benefits, the labor component of downtime costs alone would be $896,000 a week, which works out at over $46 million per year,” according to a blog post by William Thompson.

Causes of Downtime

The Uptime Institute has reported that 88 percent of unplanned downtime is directly related to human error and mechanical problems.

This report states that 29 percent of downtime is attributed to UPS system failure, meaning the battery backup for these machines has failed. Five percent came from IT equipment failure, and 10 percent was attributed to generator failure.

Additionally, 24 percent of downtime is triggered by people-caused accidents, and only 12 percent is caused by weather incidents.

Disaster Recovery vs Business Continuity

Disaster Recovery is getting back up when you are knocked down by an IT disaster; Business Continuity allows you to only stumble after an IT Disaster and avoid getting knocked down by it.

Three out of four companies fail from a Disaster Recovery standpoint, according to the Disaster Preparedness Council, which means that even more lack a Business Continuity Plan.

Even the US government provides information on Business Continuity, along with templates and instructional videos from FEMA to help you create your own plan.

Forbes has listed four main reasons to create a Business Continuity Plan:

  • Reduce Interruptions—rather than dealing with problems and issues individually, organizations can minimize downtime
  • Limit Damage—while you may not reverse the initial damage, you can prevent more data from being lost or stolen
  • Create Alternatives—if something goes down, there are set alternatives in place that can be used by employees and administrators to continue working.
  • Guarantee Employee Responsiveness—the most important part of any Business Continuity Plan is making sure your employees are all on the same page, or you may find that employees do not know how to react during an IT disaster.

Companies have to determine the amount of allowable data loss, or Recovery Point Objective (RPO), as well as the amount of time the company can realistically be down, or Recovery Time Objective (RTO). For example, an RPO of one hour means backups must run at least hourly, while an RTO of four hours means systems must be restored within four hours of an outage.

According to InfraScale, a Disaster Recovery as a Service company, 95 percent of businesses experience outages for reasons unrelated to natural disasters. Additionally, the average time it takes organizations to recover from a disaster is 18.5 hours. 

FEMA has reported that 60 percent of companies shut down within six months of a data loss disaster.

Real World Scenario

Recently, a ZAG client had an extremely critical Citrix Server in their environment go down. When we investigated the problem, we were unable to definitively determine the root cause and could not bring that server back online.

In the past, we would have either physically rebuilt the server or recovered the server from a tape backup. Rebuilding could take over 24 hours to complete and would require much cleanup and troubleshooting; restoring from a tape backup would have taken at least 4 hours.

However, this client had a Datto SIRIS device in place, and we were able to recreate the broken server’s virtual hard disk. Using the Datto enabled us to bring the server online in under 30 minutes.

Businesses must understand that preventing disasters is just as important as having a system in place that keeps the company running during and after one. This is why ZAG always recommends having a Business Continuity Plan in place.

To learn more about Datto, or to set up a Business Continuity Plan, contact ZAG today for more details.  


Virtualization is Redefining Business: Healthcare

Virtualization is now redefining business and will be crucial to the long-term evolution of any agile digital organization across all verticals. Virtualization can create new economics in terms of data center consolidation, user experience optimization, and security.

Organizations today are under constant pressure to do more with less. One way to do that is to efficiently enable your workforce with the best tools available from any network and any device. 

In a healthcare environment, that workforce may be primarily inside hospital walls, where mobility means a faster, more transparent computing experience for practitioners; faster access to data means less time lost to technology and more time for patients.

This mobility between devices within the hospital means more patients can be seen by doctors and nurses with fast access to the same full-range patient data. Server virtualization technology can provide this flexibility, while delivering cost optimization and process efficiencies within the organization that improves patient care. 

Key virtualization drivers for the medical industry include: the move towards electronic medical records (EMR) deployment, support for an ever-increasing number of personal mobile devices and providing secure access to sensitive patient data for authorized individuals (part of HIPAA compliance).  

One example of this application and deployment is provided by a large healthcare facility in the Northwest. This hospital provided extensive technology resources to their staff, including clinical databases and real-time patient monitoring systems; however, the healthcare professionals were constrained in their ability to put these resources to work for their patients, because more time was being spent interacting with technology than with patients.

To prepare for a patient visit, both doctors and nurses had to log in to a computer in the team room and then open and log in to several applications in order to browse various types of patient data. Once they moved to the patient room, they had to do it all over. With only 20 minutes of time available to see each patient, this limited the quality of patient interaction, which impacted patient care and decision-making.

In this hospital, there were more than 4,000 computer workstations. Some were stationary (sitting in an office), and others were mobile (shared devices that were rolled from patient to patient). 

IT support was a constant challenge, with many of the problems being repetitive issues and common errors. The hospital needed a better solution: a way to provide security anywhere, on any device, while improving patient interaction and quality of care, and increasing profitability through more efficient processes.

The hospital chose a hosted virtual desktop infrastructure delivery model (hosted VDI), powered by Citrix XenDesktop™, using on-demand apps, with Citrix XenApp™, to virtualize the hospital’s clinical environment that included an electronic medical records system (EMR) and more than 300 other applications. 

By virtualizing the entire desktop, provisioning speed is accelerated, user mobility is increased and log in times are minimized. With desktop virtualization in place, the healthcare practitioners now use single sign-on (one set of user credentials used across multiple platforms) to access their desktop and applications using a zero client. 

The process takes only seconds and allows medical professionals more preparation time before seeing the next patient. Once in the patient room, they log in again, but this time they are immediately directed to the exact same desktop state they just left, with the full information and patient context already displayed. The whole process takes seconds, without interruption from one desktop to the next.

Patient interaction is immediate, because now doctors can readily and instantly access information that they normally might have skipped due to additional required application log ins. Patient information is now at their fingertips wherever the practitioner goes.

What about practitioners outside the hospital who require access? With the Citrix solution they implemented, doctors, nurses and administrators who need access from their personal devices and networks can easily download and install Citrix Receiver™ to access their virtual desktop. Citrix NetScaler® provides secure remote access both inside and outside the hospital.

From the hospital’s perspective, virtualization has streamlined the process for updating or installing new applications. Since the process is exactly the same for every virtual server, it is now possible to deploy far more servers in a more consistent fashion that is quicker and uses fewer staff resources. 

Like this hospital, many companies see the need for Mobility in the workforce, but many do not know how to adopt and implement BYOD initiatives or how to ensure secure access to apps and data from any network. Organizations need to educate themselves on Virtualization best practices and security measures in order to implement them successfully. 

Email Security: Fight Spear Phishing with DMARC

The Art of Spear Phishing

In the past, users might receive an email asking them to follow an embedded link to their company website and provide personal or corporate information. There was a good chance that both the email and the website were fake, and at that point, critical information may have been compromised and used by these "hackers" to obtain confidential information.

Now fast forward to today's business cybercriminals.

"Spear Phishing"
"Spear Phishing" perpetrators target specific people within an organization, as opposed to sending emails to masses of users in the hope that some will respond. The new cybercriminals send what looks like a real email from an actual user name within the company, using company websites and social media as resources for this information. The receiver is typically a person in the finance department who can process payments and other financial transactions.
The receiver sees an email from the supposed executive requesting, perhaps, a wire transfer of funds to a company client. The email may also look like it is coming from the executive's personal email account, and would most likely request a wire transfer to a known customer or client, which is normally a legitimate request. The account information embedded within the email, however, belongs to the perpetrator and is valid only until the scam is reported.
Spear phishing depends on three things:

  • The sender must be known: an executive such as the CEO, CFO or CTO.
  • The embedded information in the email looks legitimate: logos, even the names of known people within the organization.
  • The request itself falls into the legitimate arena: from the CEO or CFO, "Please wire money to our client."

An example of a typical spear-phishing setup is below:

John Smith, CEO of Our Company (found on the company website)
Becky Thomas, working in the finance department (found on social media)

Posing as John, the attacker emails Becky with an urgent wire transfer request.

Statistics from the FBI – Krebs on Security


In January 2015, the FBI released stats showing that between Oct. 1, 2013 and Dec. 1, 2014, 1,198 companies lost a total of $179 million in business email compromise (BEC) scams, also known as “CEO fraud.” The latest figures show a marked 270 percent increase in identified victims and exposed losses. Taking into account international victims, the losses from BEC scams total more than $1.2 billion, according to the FBI. 
While email threats continue to rise, recent data shows that establishments across assorted fields are protecting their environments with the DMARC protocol.

What is DMARC?


DMARC, which stands for Domain-based Message Authentication, Reporting & Conformance, is an email authentication protocol. It builds on the widely deployed SPF and DKIM protocols and adds a reporting function that allows senders and receivers to improve and monitor domain protection from fraudulent email. More companies are implementing this protocol, including AT&T, Comcast, Yahoo, Facebook, and Microsoft.

DMARC is a great defense against Spear-Phishing and stops many of the most common attack methods. It is also a free service; however, implementing DMARC should be done conservatively and with a watchful eye.  There is a high risk of false positives.   

Implementing DMARC


Preparation: 
Create a DMARC record on your public DNS entries. This tells your organization and others what to do with bad/fake mail that is pretending to come from your organization. If you don’t already have SPF and DKIM entries, these will also be created at this time.
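
For illustration, a DMARC record is simply a DNS TXT entry published at _dmarc.<your domain>; the domain and reporting address below are placeholders. A typical deployment starts in monitor-only mode:

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

Here p=none tells receiving servers to report failures without affecting delivery, and rua specifies where aggregate reports are sent for review. Once monitoring shows few or no false positives, p=none is changed to p=quarantine or p=reject, which is the update described in the deploy-and-maintain step below.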

Monitoring: 
For about a month, someone needs to monitor the deployment to identify false positives: for example, reports that a partner organization sends on your behalf (and that don't go out very often), or a server that sends email through irregular routes. It can take anywhere from a few hours to a couple of weeks to sort through the initial flagged emails and separate the good from the bad. During this time, adjustments are made to ensure that legitimate emails fall within the scope of the implementation.

Deploy and maintain: 
Once things have settled into a steady state where there are few or no false positives, the DMARC record is updated to tell other servers to quarantine or reject bad messages. Since no company’s environment is static, DMARC includes a provision to send a log of flagged mails to your administrator. This allows any future necessary adjustments to be identified and addressed without the need for a regular manual check-in. 

There are tools that will help thwart these types of email attacks. However, these tools are not the "end all" solution. End users should be cognizant and diligent with any communications that come into their company, double-checking (in person or by phone) to verify requests such as wire transfers. After all, "fool me once, shame on you; fool me twice, shame on me." The only problem is that once is all it takes.

Avoid Buying Excess Email Licenses

When operating at the Enterprise level, it is easy to lose track of how many email accounts your company has and how many of those are being used. This becomes problematic when purchasing licensing, because there’s no point buying licenses for idle users.  

One of ZAG's clients realized this when conducting their Microsoft Enterprise Agreement True Up; they needed accurate reporting to ensure that they were not purchasing more licenses than necessary under their Enterprise Agreement.

Our client asked us for a list of all Office 365 mailboxes, the license applied, and the last login. This is not something that Office 365 keeps track of or reports on, so we couldn’t simply pull the information from the source.  

A ZAG Solution Architect came up with the idea to write a PowerShell script using multiple Office 365 administrative interfaces to capture and correlate the information our client needed.  While this may be a common scenario, it is not one that is easily resolved. Extensive knowledge of PowerShell scripting and the available Office 365 administrative interfaces was required.

Exchange Online Remote PowerShell was used to get a list of all the mailboxes and the last login date and time.  Azure Active Directory PowerShell was used to get a list of licenses applied to each mailbox captured from the Exchange Online Remote PowerShell.  Finally, all the necessary information was exported to a text file, so it could be manipulated in Excel.
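
A minimal sketch of that correlation is below. It assumes an authenticated Exchange Online Remote PowerShell session and a connected Azure Active Directory PowerShell session (Connect-MsolService); the output path is a placeholder:

# Walk every mailbox, pull its last logon from mailbox statistics,
# and look up the assigned licenses in Azure AD.
$report = ForEach ($mbx in (Get-Mailbox -ResultSize Unlimited))
{
    $stats = Get-MailboxStatistics -Identity $mbx.UserPrincipalName
    $msolUser = Get-MsolUser -UserPrincipalName $mbx.UserPrincipalName
    [PSCustomObject]@{
        Mailbox   = $mbx.UserPrincipalName
        Licenses  = ($msolUser.Licenses.AccountSkuId -join ";")
        LastLogon = $stats.LastLogonTime
    }
}
# Export the results so they can be manipulated in Excel.
$report | Export-Csv "C:\Reports\MailboxLicenses.csv" -NoTypeInformation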

This solution allowed our client to determine the number of licenses they needed and ensured that they were not paying for licenses they didn't need.

Learn more about this PowerShell script and how you can utilize it for your business needs.

Three Questions that will Keep Your Projects on Track

Team project status meetings are a time to gauge the temperature of a project, find out how things are going, and identify what still needs to be done. Ultimately, the goal is to understand the progress of the project and take corrective action if required. Using a simple Scrum work management practice in a daily standup meeting can improve project communication and management.

Why daily? Short daily meetings allow you to quickly close the loop on issues and problems that occur during the project. Depending on the size of the teams involved and the complexity of the project, weekly or biweekly meetings may be more appropriate.

Why stand up? It’s been observed that meetings that occur while the attendees are standing move more quickly to conclusion than those where the participants sit down. A typical standup meeting is 10 to 20 minutes.

While projects vary in size and complexity, there are three simple questions that each team member should be asked to get the pulse of even the most complex project.

1. What has been completed in the last period?

2. What will be completed during the next period?

3. Are there any obstacles or blockers to be addressed?

The first two questions are designed to be brief, concise explanations of what has been or will be delivered, instead of a long explanation of how or why.

The third answer should also be a brief, concise identification of obstacles. Be sure to set up discussions outside of the standup meeting to address obstacles and blockers.

You may find that initially the team is quick to launch into issue resolution. Remind the team that the standup meeting is the time for everyone to share what they have accomplished and what's next. Document the issues and, if necessary, schedule specific problem-solving sessions.

If you are not already doing so, give this daily standup approach a try. With practice, it will make for a more cohesive team, better communication, and improved project outcomes.

The Complexity of Managing User Passwords

Leading businesses are constantly implementing new technologies that help make their companies run more efficiently. One great example of this is self-service password reset, which enables users to reset their passwords without calling IT.

This new technology has been deployed to:
      •    Reduce the workload on IT
      •    Enhance end user satisfaction by enabling self service
      •    Increase effectiveness by removing barriers to working outside of normal business hours
      •    Allow for stronger password requirements without the fear of locking users out

As business capabilities progress, we as professionals need to ensure that we have kept up with potentially unintended consequences of new service offerings.  Password self-resetting and how companies handle employee exits are perfect examples of this.  

Previously, some organizations merely changed a user's password when the user exited the company. While this may have been adequate in the past, password self-resetting requires that we change this process and ensure that departing users' accounts are disabled. This ensures that a former employee cannot simply change his/her password and get back into the network.
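
In an on-prem Active Directory environment, the disable step can be as simple as the following sketch ("jdoe" is a placeholder account name):

# Requires the ActiveDirectory module (RSAT).
Import-Module ActiveDirectory
# Disabling the account blocks both interactive logons and self-service
# password resets; changing the password alone does not.
Disable-ADAccount -Identity "jdoe"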

ZAG has deployed self-service password reset solutions to organizations using technologies such as Microsoft Azure and Dell Quest. These solutions have enhanced the user experience by making users more self-sufficient.

There are several key items that should be decided prior to selecting one of these solutions, such as:
      •    Should administrators be able to self-reset their own passwords?
          o    We generally don't recommend this, as it is an attack point that criminals may attempt to utilize.
      •    If Administrator self-resets are allowed, should they be informed of the change as a backup security function?
      •    Is the password tool mobile friendly?
      •    What methods of authentication are available?
          o    Text
          o    Email to an alternate address
          o    Security questions
          o    Etc.
      •    How secure is the system being implemented?

Workers today are productive 24x7, and working at this pace drives increasing pressure on IT. Self-reset password technology gives your IT one less thing to worry about: it frees the team from this mundane keep-the-lights-on task so they can focus on adding value to the business.


Email Security: What's in a Domain Name?

There is a trending epidemic related to your company's email security. Criminals are setting up fake domains by doing things like replacing the letter "m" with "r n" in the domain name (e.g., example.com becomes exarnple.com).
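
As a quick illustration, a few lines of PowerShell can generate common look-alike variants of a domain so they can be registered defensively or added to a watch list. This is only a sketch; the substitutions shown are a starting point, not a complete list:

# Generate common look-alike spellings of a domain label.
$name = "example"   # your domain label
$tld  = "com"
$variants = @(
    "$($name -replace 'm', 'rn').$tld"   # "m" becomes "r n": exarnple.com
    "$($name -replace 'l', '1').$tld"    # digit one in place of lowercase L
    "$($name -replace 'o', '0').$tld"    # zero in place of the letter o
) | Where-Object { $_ -ne "$name.$tld" } | Sort-Object -Unique
$variants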

Your company's Exchange Administrator may take a shortcut and simply block incoming emails from these types of domains, but we believe this is shortsighted. Blocking only protects your business from inbound attacks; without more aggressive action, the same look-alike domains can be used to attack your customer base.

Companies must be mindful of criminals using look-alike domains, or their customers may suffer the consequences. Fake domains could allow these criminals to steal money from your company and/or your customers. If money is stolen from your customers in this manner, the company-customer relationship will be negatively impacted, even though your company did nothing wrong.

ZAG has had at least four clients hit by one of these attacks within the last three weeks alone. Fortunately, none of them have suffered losses, but there have been cases where they came close to falling victim to this scam.

Companies need to be mindful of this threat. We recommend that businesses acquire registration for domain names that are similar to their own. We also recommend confirming ACH changes through multiple factors to achieve true financial security. 

This risk is real and must be addressed immediately. And though this may be an outside-the-box approach, we feel this solution can greatly protect you. If you want to be secure, you must stay vigilant.

To learn more about securing your IT environment, contact a ZAG representative today.

How Business Continuity Saved Us From A Tech Disaster

Water leaked from the floor above and damaged our most critical servers. 

At ZAG, we have been helping companies prepare for, and recover from, tech disasters for many years. This week was the first time that we personally experienced one. We learned (and relearned) many lessons in this disaster, and the goal of this posting is to share some of those lessons.

We became aware of the situation when our network suddenly shut down.

On Monday, the air-conditioning system in the suite above us, which was undergoing construction, froze up, causing an enormous amount of water to flow into our server room. As luck would have it, the water hit our most critical corporate systems.

Essentially, gallons and gallons of water poured down from the ceiling into our critical systems. The water damaged our Number Four rack from the top down: the switches were a total loss, the servers were dramatically impacted, the SAN was slightly impacted, and the UPSs made it through unscathed.

The first obvious lesson highlighted is that business continuity is as important as backups. Business continuity is key to business survival, especially during a technical emergency.

First, our voicemail system went down. This system handles the routing of our support calls coming from our clients. Fortunately, we have planned for such situations by having a system in place with AT&T whereby all incoming calls would be redirected to a different number in the event that our phone system was not reachable. This enabled us to continue to support our clients even without a phone system.

Fortunately, the water damage happened after hours, so the vast majority of incoming calls were support related. This meant the ZAG PRI was not overloaded with incoming calls, and we continued operating and supporting our clients quickly thanks to Business Continuity Planning.

Our second lesson was the vindication of our data center design. If our backups had been in the same rack as our servers, the experience could have been much worse.

Losing a single rack, which is what happened in our case, is often not considered when planning a data center layout. ZAG had placed all backup servers in a different location; this ensured that the backup server was protected from the localized disaster we experienced.

The final lesson was the power of virtualization. Had our key systems not been virtualized, and taken the damage that several of our virtualized hosts did, we would have been down for much longer.

We completely lost three Hyper-V servers; their motherboards were destroyed by the leaking water. However, we greatly benefited from the fact that we run a virtual environment, and our SAN suffered only minor damage. Thankfully, we had enough remaining virtual hosts to bring up our mission-critical servers and keep the business running.

Our disaster this week was real. The damage to our systems was great. Nevertheless, we had Business Continuity practices in place alongside recovery methodology, which helped us successfully weather the “storm” without a significant loss in service.

Office 365 Photo Availability Issue - Part 2

Office 365 Photo Conversion and Import

This post is Part 2 in the series on converting and importing user photos for availability in Office 365. It covers generating thumbnail photos for the on-prem AD, importing photos to Exchange Online, and the commands that can be used to extract and view the photos from both locations. To review the conversion steps, see Part 1 of this series.

Office 365 Photo Workflow

In the previous post, we walked through how to take a base64-encoded string and convert it to a JPG for import into AD and Exchange Online. Next, we'll generate the on-prem AD photo thumbnails, then import the hi-res photos to Exchange Online.

Generate 96x96 AD thumbnailPhotos and import to on-prem Active Directory

User photos in the on-prem Active Directory are stored in the thumbnailPhoto attribute for each user. This attribute has a size limit of 100KB, so all photos imported must be smaller than this. The photos we've just converted from base64 to JPG may be high-resolution, so some will be much larger than this and will fail to import.

It’s typically easier to use PowerShell to resize a photo according to a pixel ratio rather than a file size, so after some experimentation, it appears that resizing to 96x96 pixels will keep the file sizes under 100Kb.

To resize the photos, we’ll need the PowerShell Image Module located here:  https://gallery.technet.microsoft.com/scriptcenter/PowerShell-Image-module-caa4405a

After following the instructions to set up the module, we can "activate" it by running Import-Module Image.

The next thing we’ll do is create an array out of all the JPGs in the c:\photos directory, and add that to a variable we can manipulate, like so:

[Array] $LGPhotos = Get-ChildItem c:\photos | Where-Object {$_.Name -like "*.jpg"} | Select-Object Name

Then we just iterate through the array with a ForEach loop, passing each photo through the image filter and adding *_AD.JPG to the name of the photo.  This way we can differentiate between the thumbnail photos we will import to the on-prem AD and the hi-res photos we’ll import to Exchange Online.

ForEach ($CurLGPhoto in $LGPhotos)
{
    # Load the full-size JPG (the files live in c:\photos).
    $image = Get-Image ("c:\photos\" + $CurLGPhoto.Name)
    # Strip the extension so we can append the _AD suffix.
    $imagename = $CurLGPhoto.Name -replace ".jpg$", ""
    # Scale the photo down to the 96x96 thumbnail size.
    $image = $image | Set-ImageFilter -filter (Add-ScaleFilter -width 96 -height 96 -passthru) -passthru
    $p = "c:\photos\" + $imagename + "_AD.jpg"
    $image.SaveFile($p)
}

And now we have 96x96 thumbnails of all our hi-res photos!

To import the photos to the on-prem AD, we re-use the bit of script with the {If…Else} statement that checks for photo content in the CSV file, set our user variables, and follow that up with the commands to read the JPG into memory and import it into AD.

In this case, $CurADUserName is the primary email address from the CSV file minus the domain, and $CurADUserPhotoPath is the full path of the JPG that has _AD.JPG in the name for that user.

# Look up the AD user whose sAMAccountName matches the email prefix.
$CurrentADUser = Get-ADUser -Filter {samAccountName -eq $CurADUserName} -Properties SamAccountName

# Read the resized thumbnail into memory as a byte array.
$PhotoData = [System.IO.File]::ReadAllBytes($CurADUserPhotoPath)

# Write the bytes into the thumbnailPhoto attribute.
Set-ADUser $CurrentADUser.SamAccountName -Replace @{thumbnailPhoto=$PhotoData}

We can see from the above that we’re matching each user’s on-prem AD SamAccountName with their primary email address from the CSV file minus “domain.com”.

And presto!  All users now have photos imported to Active Directory.

Import high-res photos to Exchange Online

The last stop on our adventure tour is to import the high-resolution photos we converted earlier to Exchange Online.  As specified before, Exchange Online is the authoritative source for Office 365 photos.

By now you’re familiar with the code we’re using in the {If…Else} statement earlier to iterate through the CSV file and look for user photo data, so we won’t repeat that here, except to say that the $CurExUserName variable is the user’s primary email address from the CSV file minus “@domain.com”.

The command to import the user photos to Exchange Online is Set-UserPhoto.  First we get the UPN for the current user by doing a Get-ADUser, we select the UPN for use in the import, and we read in the photo content as follows:

# Get the user's UPN, matching on the sAMAccountName.
$CurrentADExUser = Get-ADUser -Filter {samAccountName -eq $CurExUserName} -Properties SamAccountName,UserPrincipalName

# Read the hi-res photo in as a byte array.
$Exphoto = ([Byte[]] $(Get-Content -Path $CurExUserPhotoPath -Encoding Byte -ReadCount 0))

# Upload the photo to the user's Exchange Online mailbox.
Set-UserPhoto $CurrentADExUser.UserPrincipalName -PictureData $Exphoto -Confirm:$False

Photos are immediately available via OWA and Outlook in Online mode (Windows only), usually within 30 minutes for Lync (2013 for PC only), and within 24-48 hours for other clients.

Some final thoughts…

Importing photos is all well and good, but let’s say for some reason that you want to be able to view the existing photos in Exchange or AD.  Isn’t there a facility for doing that?

Well, the answer is “yes, but…” (you knew that was coming…)

User photos cannot be directly viewed in on-prem AD or Exchange Online except through the clients mentioned above.  However, with a little reverse-Kung-Fu we can take the commands we used above to import the pictures and read them back out to the file system.

So, to export the existing photo for an on-prem AD user, we can use this bit of script. Just replace “USER” with the SamAccountName of the user whose photo you want to export.

# Fetch the user along with the thumbnailPhoto attribute.
$User = Get-ADUser "USER" -Properties thumbnailPhoto, SamAccountName

$Filename = 'C:\Photos\' + $User.SamAccountName + '_ADExport.jpg'

# Write the photo bytes back out to the file system.
[System.IO.File]::WriteAllBytes($Filename, $User.thumbnailPhoto)

If a photo exists for that user, you should see it in the C:\Photos directory.  If a photo does not exist, you’ll get an error that the “value cannot be null”.
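
To avoid that error, a small guarded variant of the same commands checks for photo data first (a sketch building on the $User and $Filename variables above; the message text is illustrative):

If ($User.thumbnailPhoto)
{
    [System.IO.File]::WriteAllBytes($Filename, $User.thumbnailPhoto)
}
Else
{
    Write-Host "No thumbnailPhoto stored for $($User.SamAccountName)."
}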

Likewise, to export a photo for a user stored in their Exchange Online mailbox, we can use this.  Just replace USER@DOMAIN.COM with the email address of the user:

$user = Get-UserPhoto USER@DOMAIN.COM
$user.PictureData | Set-Content "C:\Photos\ExPhoto_$($user.Identity).jpg" -Encoding Byte

Once again, if a user has a photo, it will now show up in the C:\Photos directory.  If a user does not have a photo in their Exchange Online mailbox, you’ll get a PowerShell error that “there is no photo stored here”.

I hope this information is helpful!

The above is informational in nature. ZAG does not warrant the above for the reader's specific environment. Please contact us with questions or if you would like to engage us to implement this solution in your environment.

Author

Loraine Treadwell

Consultant

ZAG Technical Services, Inc.

Office 365 Photo Availability Issue - Part 1

Office 365 Photo Availability Issue Summary

Having user photos available consistently across the Microsoft Office 365 offerings greatly enhances the user collaboration experience.  Unfortunately, when a customer has a Hybrid Exchange / Office 365 implementation, directory synchronization is typically insufficient and unreliable in more advanced environments.

Sourcing the user photos through directory synchronization currently has the following limitations:

  • Photos are typically uploaded via batch file to the on-prem thumbnailPhoto AD attribute, which has a size limit of 100KB.

  • When these photos are synchronized to Office 365 via directory synchronization, they will remain low resolution photos.

  • Once a photo has been set for an individual user, directory synchronization does not update the photo again, even if it has been modified through the on-prem AD.

In addition, if a customer wishes to take advantage of a third-party data service such as WorkDay via Okta, these third-party solutions often cannot handle photo replication appropriately, if at all.

Therefore, another method is typically required in order to import low-res photos to the on-prem AD directory while simultaneously utilizing hi-res photos for Office 365 services.

Office 365 Photo Workflow

Exchange Online is the authoritative source for photos accessed through Office 365. The photos are stored in user mailboxes and are either accessed there by all O365 services or propagated out to the other services, depending upon which service is in question. Lync accesses the photos directly, while SharePoint replicates copies and generates its own set of hi-res photos.

Exchange Online is also capable of storing high-resolution photos that are much larger than the 100KB on-prem AD attribute limit.

In general, photos uploaded to Exchange Online are available immediately in OWA, Lync (Windows), and Outlook (Windows) in Online (not cached) mode. Photos can take up to 24-48 hours to become available through other clients (Outlook cached mode (Windows only), Lync for Mac).

Photo Conversion Solution Summary

This section will review the major functions used to convert the photos from base64 and the basic commands used to import them to Exchange Online.

For the full script and documentation on generating the on-prem AD thumbnail photos and importing the photos to Exchange Online, please contact us.

Import the CSV containing the photo data

With many cloud hosted HR systems, organizations may receive photo data via a weekly CSV file which includes a base64-encoded field containing the photo data.  So the first step is to convert the photo data from base64 back to *.jpg. 

For this we use a little PowerShell magic.  First, we utilize the Import-CSV PowerShell cmdlet to pull the CSV into memory.  A nice feature of Import-CSV is that it automatically turns each column header into a property name on the imported objects.  So we can move directly to a ForEach loop to handle each photo field.

               Example:  $ImportCSV = Import-CSV $FileName

While we’re at it, we can set a directory to send the converted photos to:

$FileExportDir = "C:\Photos\"

Next, we can set a ForEach loop to handle each line in the CSV.  This sets the variable $Line to represent each line in the imported CSV:

     ForEach ($Line in $ImportCSV){

The header in the CSV file which contains the base64-encoded strings is attachment_Photo_Content, so we specify this to set the variable for each line that contains a photo:

               $CurUserPhoto = $Line.attachment_Photo_Content

We also specify a couple more variables to pull each user’s email address from the CSV as well as specify to add *.jpg to the filename.

$CurUserName = $Line.primaryWorkEmail -replace "@DOMAIN.com$", ""

$CurADUserPhoto = $CurUserName + ".jpg"

Then we tell the script to check for the presence of photo data for each user (not all users will have photos):

If (!$CurUserPhoto)

If the photo does not exist, as indicated by the “!” in the If statement above, the script will simply report a photo is missing and move on to the next line in the CSV.
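
Putting those pieces together, the skeleton of the loop looks like this (a sketch; the Write-Host message is illustrative):

ForEach ($Line in $ImportCSV)
{
    $CurUserPhoto = $Line.attachment_Photo_Content
    $CurUserName = $Line.primaryWorkEmail -replace "@DOMAIN.com$", ""
    If (!$CurUserPhoto)
    {
        # No photo data for this user; note it and move on.
        Write-Host "No photo data found for $CurUserName - skipping."
    }
    Else
    {
        # Conversion to JPG, covered in the next section.
    }
}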

Convert the photos from base64 to JPG

If, however, a value in the attachment_Photo_Content field does exist for a specific user, we need to convert it from base64 to a more appropriate format we can import, like JPG.

To do this, we summon our .Net Kung Fu as follows:

[Convert]::FromBase64String($CurUserPhoto) | Set-Content -Path ("$FileExportDir" + "$CurUserName.jpg") -Encoding Byte

And bingo!!  If the base64 string contained in the CSV file is valid, we’ll have a picture named USERNAME.JPG in the C:\PHOTOS directory.

Generate 96x96 AD thumbnailPhotos and import high-res photos to Exchange Online

For information on generating the smaller 96x96 on-prem AD thumbnailPhotos and importing the converted hi-res photos to Exchange Online, please see part 2 of this post, coming soon.

The above is informational in nature. ZAG does not warrant the above for the reader's specific environment. Please contact us with questions or if you would like to engage us to implement this solution in your environment.

Author:

Loraine Treadwell

Consultant

ZAG Technical Services, Inc.

How many times has your personal information been hacked?

We hear about data breaches in corporations often. In fact, they've become so regular that we no longer respond to them in the way we should. The outrage is gone.

The New York Times recently did a great service. They have an online form that can give you a feel for how often you've been hacked and what information may have been lost. The list is by no means comprehensive, but it is significant to see the impact of these hacks. Check it out here:

http://www.nytimes.com/interactive/2015/07/29/technology/personaltech/what-parts-of-your-information-have-been-exposed-to-hackers-quiz.html

We'd encourage you to take a look at it and see what the potential impact on you may be.

As security experts, we can help ensure that your network is protected from attacks. Please reach out to us today to discuss your potential vulnerabilities.

Thank you New York Times for raising awareness on this!

The Microsoft Software Asset Management Letter

Over the past year we have noticed many more customers receiving Microsoft Software Asset Management letters. Many customers report that these feel like a Microsoft software license audit. The letters ask the customer to confirm their accurate use of Microsoft software. Since many clients have acquired Microsoft licensing in many different ways over the years, it is frequently difficult for a client to easily determine what they own and what they are using. These engagements are known as SAM letters.

SAM (Software Asset Management) is a function within Microsoft that focuses on customer compliance. As Microsoft's business evolves to be more cloud focused, users with on-premise licenses are subject to a software inventory audit, and SAM is normally the first entrée into the audit process. This post is meant to demystify SAM and quell the angst and concern associated with it.

It's time we move past SAM's negative connotations and into what it is meant to be: a software asset management lifecycle program. In doing this, organizations can understand their licensing position and rest assured that they are properly licensed on an ongoing basis.

It is our belief that organizations that are under-licensed are in that situation through error, not on purpose. Licensing is complicated, after all, and this confusion is where errors happen.

Organizations need a true lifecycle plan. We need to move past SAM as an audit and into SAM as a program. ZAG realized that offering a SAM service to customers is a value-add and a way to minimize risk to your organization, so we are now officially SAM certified with Microsoft. Not only can we assist customers through a licensing review, we can also help ensure their licensing remains correct on an ongoing basis. Only in this way can your organization be comfortable that you are correctly licensed.

Please feel free to engage us today to learn what a SAM offering looks like and how it can benefit your organization.

ACH Phishing Fraud Risk and how Finance can fix it

There is a problem with ACH phishing fraud that is not widely acknowledged. It is a security flaw that is affecting organizations today. Criminals are using it to steal from legitimate businesses.

The flaw isn't necessarily with ACH itself, but rather with how organizations manage it and communicate about it. Organizations rely too heavily on email today. We have to realize that email is not fundamentally secure; anyone can spoof an email address and commit ACH phishing fraud.

What this means is that criminals can impersonate a sender and send out fraudulent ACH information to get a customer to redirect a payment destined for you to their account. By using an email phishing attack, they can do this without ever touching your network. Your customers can fall victim to this even if you are 100% secure.

Yes, there are methods to secure customers from email phishing, and ZAG often consults with organizations to do just that. But there is little that your IT security team can do to make sure that your customer's email platform is more secure.

Ultimately, we need to move away from relying on IT security to provide ACH security. Organizations need to implement a second factor of validation for any ACH change. Instead of simply accepting an email announcing an ACH change, customers need to be told to validate it through a second channel of communication. Finance should inform customers that they should call either Finance or their sales rep to validate any changes to ACH routing information. Ultimately, this should be written into your terms and conditions with the customer.

We encourage IT security to reach out to their Finance Departments to put these rules in place. Tell your customers to call to verify any ACH change. It is the only way that you can prevent the risk of ACH fraud through email phishing.

Again, the fraudulent ACH email may never touch your network. However, if your customers lose money to someone impersonating you, it will dramatically and negatively affect your relationship with them. It may ultimately ruin that relationship and cost you significantly.

Finance must step up and put security steps in place to protect the ACH process from this kind of risk. They need to know about this problem and not rely on IT to solve it.


Agriculture and IT

Today Forbes hosted the AgTech Summit, part of its Reinventing America series. This was a very powerful summit with a great deal of information. The conference was held in Salinas, CA, an ideal location given the region's agricultural roots and proximity to Silicon Valley.

The future of technology in agriculture will be critical. In fact, agriculture is expected to be one of the largest users of the Internet of Things (IoT). The use of drones in agriculture is quickly going to outgrow their use by the military. And most importantly, forecasted population growth will require dramatic technological improvements to keep up with the needed food supply.

The Salinas and Silicon Valley pairing is ideal for meeting this dramatic need. It is great to see the alignment brought to the forefront through programs such as Forbes' #reinventingamerica. The effort underway by both startups and the largest established firms is awesome. ZAG is honored to be a part of this effort and industry.

We need to move past Backups. We need to move past Disaster Recovery….

IT organizations need to grow past talking about backups.  We even need to grow past talking about Disaster Recovery.  We need to mature to the point of talking about Business Continuity.

This isn't to say that backups aren't important. We need to have them. We need to be able to recover from a local data loss. If a SQL server becomes corrupt, we need to be able to restore and recover to the latest point possible. Remember, most issues that face a company aren't disasters; they are localized losses that can be overcome with a great backup solution.

This isn't to say that Disaster Recovery plans aren't critical to an organization. Any enterprise needs a solid Disaster Recovery plan; if it doesn't have one, the Board should be asking questions. Disaster Recovery plans are complex and multi-faceted. Organizations will generally use many methods to replicate data: storage-based replication, VMware or Hyper-V replication for some virtual machines, and transaction log shipping to keep SQL as current as possible. Many tools will be used, all in an effort to reduce RTO and RPO.

Backups and Disaster Recovery are important, of course. Ultimately, though, IT needs to mature to the point of moving past them and into a real Business Continuity discussion. No matter what SLA is established for an RPO and an RTO, those times will ultimately be extended if DR hasn't been talked through with the business leaders ahead of time.

The move to DR is a major one with significant costs, and moving back is often a big deal. A disaster is never declared lightly, because the move is not trivial. The business leaders in the organization need to be brought into the establishment of the rules around declaring a disaster well ahead of the actual incident. They need to understand how to operate in a DR situation without the systems they normally rely on. Trucks still need to ship, products need to be produced, and service still needs to be rendered during the outage.

The business will also need to know how to recover from the implementation of a disaster recovery plan. How will they know what product has been processed? Will they need to close out the plant, take inventory, and start a new day in ERP? How will they reconcile what data is or isn't in the system? These plans need to be thought out.

The IT team will do everything possible to remove any single point of failure, but not planning for a complete disaster is short-sighted. The business needs to assume a disaster will occur. If planning for a disaster is an IT-exclusive function, the plan will fail. The development of this business continuity plan must be a business function. IT can ensure that what is required is achievable within the right budget, but the business leaders need to be the driving force behind the plan if it is to be successful.

A DR plan without the business leaders' full involvement and buy-in will fail. IT must grow past DR.

Hackers use Ads to Capture your Clicks

Beyond email phishing to trick unsuspecting email users, hackers have also been buying online ads on popular websites to capture your clicks and infect your computers.

This blog shows how organizations should work to combat these attacks.