The Network Manager at Westminster School presents solutions to sticky problems...

Thursday 5 November 2009

How to create a Software Publisher Certificate for internal use from a CA

Not that I am suggesting that there is any conspiracy to make money from hapless IT departments wishing to sign their internal applications, but...

Apart from purchasing a Software Publishing Certificate (SPC) from a commercial and recognised issuer there is a way to create a SPC using free tools and the PKI infrastructure in your organisation. This does have the disadvantage that you can only verify the application on PCs that have your organisation's root certificate installed in the Trusted Root Certificate Authorities partition. An easy thing to do with Group Policy, and also possible for non-domain computers that really require it without heavy lifting.

Why would you want to do this? Anyone wanting to distribute custom applications or custom installers can now distribute their internally compiled software without reducing security levels. Want users to install an application on Vista or Windows 7 from the control panel when they need it? If it is not signed, it will not happen.

So, how to do it? Requirements: an Enterprise CA (running on Windows Server 2008 Enterprise - important, as you need to manage certificate templates, which is not possible on the Standard edition), OpenSSL, the PVK conversion tool, and the Microsoft Platform SDK (which, if you are developing, one assumes you are using.)

Another thing you need to identify is what you want to appear when you install an application and the UAC prompt pops up. Typically, you will want your users to identify you as the source of the software, so something like "Company Name IT Department." I will refer to this later as "your certificate name."

On your CA you will need to right-click Certificate Templates and select Manage. In the Templates Console you will need to create a code signing template. For our purposes I created a 2008 Enterprise certificate with the extended purpose of Code Signing. Under Request Handling, "Allow private key to be exported" MUST be ticked. Under Subject Name, ensure "Supply in the request" is set. Under Issuance Requirements, ensure "CA certificate manager approval" is ticked. Click OK and close the Templates Console.

In the CA Manager, right click Certificate Templates and select new -> Certificate template to issue. Select your newly created template.

Open a command prompt and run certmgr to open the certificate manager for the user account. Right-click Personal and select All Tasks -> Request New Certificate. Click Next until you see a list of certificates you can enrol for. You should now see your new certificate template with a warning "More information is required..." (because you selected the "Supply in the request" option.) Tick this certificate template, expand it and click Properties. In the Subject tab, select "Common Name" from the drop-down list and enter your certificate name as noted above. Click Add. You may want to identify this certificate later by adding a friendly name and description under the General tab. When finished, click OK. Click Enroll and you should be told that the request is pending.

In the CA Manager, note the Request ID (important for later) and approve the request. We are not finished with the CA Manager yet...

In certmgr, refresh the tree and you will see a Certificate Enrollment Requests branch. Expand it to find a certificate issued to "your certificate name" (as noted above) and also issued by "your certificate name", with a private key. This certificate is practically useless in itself, but we need it to get at the private key. Export the certificate with the private key to a .pfx file, specifying a password. Then use OpenSSL to extract the private key to a PEM file:

openssl pkcs12 -in filename.pfx -nocerts -nodes -out filename.pem

This extracts the key after specifying the password noted above. The certificate is useless to us, so once you have the PEM file, you can practically dispose of this certificate. Now you need to convert the PEM to a PVK file.

pvk -in filename.pem -topvk -out filename.pvk

You will be asked to specify a new password for the private key and to re-enter it to verify. This password will be needed later to sign installers - so remember it.
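If you want to sanity-check the OpenSSL half of this process without touching your CA, the following self-contained round trip fabricates a throwaway key and certificate, bundles them into a password-protected .pfx (much as certmgr would export), and extracts the key again. All filenames, the subject and the password are illustrative, not the ones from your CA:

```shell
# Fabricate a throwaway key and self-signed cert (stand-ins for your CA-issued pair)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo IT Department" -keyout demo.key -out demo.crt

# Bundle them into a password-protected .pfx, as exported from certmgr
openssl pkcs12 -export -in demo.crt -inkey demo.key \
    -passout pass:secret -out demo.pfx

# Extract the private key to PEM, exactly as described above
openssl pkcs12 -in demo.pfx -nocerts -nodes \
    -passin pass:secret -out demo.pem
```

If the last command succeeds, demo.pem contains the unencrypted private key ready for conversion to PVK.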

Because of the bug in the certificate (it being issued by itself and therefore untrusted) you now need to retrieve the certificate from the CA. In CA Manager, go to Issued Certificates and list them in Request ID order. Find the Request ID (noted when you approved the request) and double click the request to view the certificate details. Note that this version of the certificate has no private key and is issued to "Your certificate name" but is correctly issued by your CA. This is what we want. On the details tab, click the "Copy to File" button to begin the export wizard. Save the certificate as a DER X.509 certificate with a .spc extension. (A .cer extension will do, but for the purposes of identifying this file later, let's just give it .spc)

Okay you now have a .spc certificate with code signing extensions, and a .pvk file that corresponds to the .spc certificate. In order to turn this into something useful, you now need to use the Platform SDK Tools:

pvk2pfx.exe -pvk filename.pvk -pi password -spc filename.spc -pfx newfilename.pfx -po password

The password corresponds to the important password you noted when creating the pvk file. I would keep the second password the same. If you import this pfx file you will now have the correct certificate, with the correct extension, subject and issued by, for signing your installers.

Finally, for ease of use, I use Tech-Pro's codesign from within Visual Studio's tools menu to do all my signing (because it works.) Of course you may use signtool and the command line or any other method using this certificate. Now all my installers are signed and can be used within the organisation without the need to reduce security AND I can continue to distribute them via active directory just as I do with commercial applications.
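For completeness, signing and verifying from the command line with the Platform SDK's signtool looks something like this (the file names and description are illustrative; adding /t with a timestamp service of your choice is optional but lets signatures outlive the certificate):

```shell
signtool sign /f newfilename.pfx /p password /d "Company Name IT Department" setup.msi
signtool verify /pa setup.msi
```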

Tuesday 6 October 2009

SharePoint Mysteries

SharePoint can do some pretty weird stuff. One day everyone is publishing quite happily. The next, they cannot add a new page, and some page properties give errors. Users are normally shown an "Error: Access Denied" page, even though they can still see the link to create a new page. While we have not traced what causes this, we have been able to fix it. The problem lies with Page Layouts and Style Libraries.

To resolve the issue, you will first have to log into the site as a site owner. Go to the top site and then Site Actions, All Site Settings, and select Master pages and page layouts. Go to Document Library Settings and click Permissions for this document library. Now add the group that you want to be able to create pages to this list and select at least "Restricted Read."

Now go back to your top site and select "View All Site Content" and select "Style Library." Once again go to Document Library Settings and click Permissions for this document library. Again add the group that you want to be able to create pages to this list and select at least "Restricted Read."

Your users will once again be able to create pages. Keep an eye on this, because we have suspicions that there is some automated process that clears the permissions for these libraries, possibly an update.

Monday 28 September 2009

Windows 7 will push back the tide...

When Windows Vista arrived, everyone had a good excuse to go to Linux. The hardware specifications were hard on old hardware. Even PCs just above specification lagged quite badly if you ran too many applications side by side. Ubuntu, though derided by Linux die-hards, has brought Linux to the masses. Easy installation and hardware compatibility make installing Linux on a 2-3 year old laptop a breeze, and a viable option against staying with a compromised, buggy XP or moving to the lagging new secure Vista.

However, if you were expecting to see blistering speeds, forget it. There was a time when you could load Unreal Tournament on a like-for-like Linux PC and push all the sliders to the right, when you could only go halfway on Windows. While gaming performance may still be better, the desktop certainly lags. KDE 4, shipping with the latest versions of Ubuntu, provides a nice shine but does not impress with program launch times. Even Gnome, now the default flavour of Linux, provides a measure of speed, but is unfamiliar to traditional Windows users and still lags when launching programs. Add to this the difficulty of managing wireless connectivity.

After 9 months of KDE 4, I finally decided to go back to Windows. With the release of Windows 7, I was impressed with its apparent lightness on a low-powered, 1Gb RAM laptop. Vista is doable on that specification, but it lags and makes your hardware feel like a dog. Windows 7 provides response times not far off how XP felt when you first loaded it (before all the add-ons and startup programs drag XP into the quagmire.) Windows 7 was installed in around 30 minutes. Hardware drivers were mostly installed within 10 minutes (after running a Windows Update), and one driver in particular took some hunting down (the latest sound drivers did not work; I had to use an earlier driver.)

All in all, it was quicker to install Windows than it was to install Linux. And the wireless, once configured, just worked, without constant wallet password prompts or dropping out after hibernation. If this is the way Windows is going, there is no reason for the average person to turn to Linux any more, other than the price.

But then again, in the business of life and business, would you not rather pay for something that works out of the box? Or would you rather spend time getting a free product working? Time is money, you know. How much does Linux really cost?

Wednesday 22 July 2009

Migrating to vSphere (VI4 for those that know)

Migrating to vSphere 4 from VI3 is a very easy process. There are, however, a couple of gotchas you need to look out for.

But first, are there many good reasons to upgrade from VI3? There are some nifty feature additions in the background: virtual IDE controllers and USB support may be a good reason to upgrade. And of course there is better 64-bit support. Add to that a greater range of settings and a sparkling new interface for vCenter, making administration a little easier. Permissions are scaled out to all objects. And, of course, there is the vNetwork Distributed Switch (an expensive step up to Enterprise Plus.) This makes your networking datacentre-based, as opposed to machine-based, which means your switch ports migrate with your VM, reducing packet loss and maintaining connection integrity.

And now the Gotchas:

vCenter Update Manager now works differently. No longer are the hosts' defences switched off in order to scan or run updates. Instead, the hosts communicate with vCenter over port 80 to collect update definitions. vCenter proxies this to the Update Manager, running on whatever port you set on installation. Of course, if you have a proxy server set on the server, you need to make sure that the local IP address of the Update Manager is excluded, or you could be spending the day trying to figure out why updates do not work...
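As an illustrative check on Server 2008 (Server 2003 uses proxycfg.exe instead), the machine-wide WinHTTP proxy and its bypass list can be inspected and set from an elevated prompt - the addresses below are examples, not recommendations:

```shell
netsh winhttp show proxy
netsh winhttp set proxy proxy-server="proxy.example.local:8080" bypass-list="<local>;10.0.0.25"
```

Whether your vCenter installation honours the WinHTTP settings or its own proxy configuration depends on how it was installed, so check both.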

Setup of the client tools is the shakiest yet. For a start, the new version of the tools requires a complete uninstall of the older version. This in itself is not a bad thing, unless the previous version cannot uninstall itself without leaving parts of itself all over your server. Look up VMware KB Article 1001354 (http://kb.vmware.com/kb/1001354); you will get very familiar with it. After the uninstall attempt falls over in spectacular failure, run through the KB article, removing all the remnants of the previous installation. Then run setup64.exe to install the latest tools. A reboot is required. It is possible the problem stems from trying to uninstall the 32-bit tools and then install the new 64-bit tools. Hopefully the next upgrade will not be so painful.

Finally, after completing the upgrade, put each of the hosts into maintenance mode and watch for VMs that refuse to migrate. One of the reasons for failure to migrate is hardware. vSphere 4 introduces more hardware virtualisation, such as USB ports. If you previously P2V'd a physical machine or a VMware Server VM, VI 3.5 simply ignored the USB ports. vSphere 4 recognises the USB hardware but does not quite integrate the original settings and therefore cannot migrate the VM. For these VMs, a simple edit of the virtual hardware will resolve the problem. But you want to find these out sooner rather than later.

Happy migrations...

Tuesday 14 July 2009

Google Chrome, a new OS?

Hmmm. "Introducing the Google Chrome OS" states the headline on the official Google Blog for July 2009. Others hail the end of Microsoft as we know it as a 'new OS' comes on the scene. Oh please! Of course Microsoft will adapt to changes in the market. But will Microsoft fall to something that has been around for years?

A careful read of the above blog will reveal that the Google Chrome OS is "Google Chrome running within a new windowing system on top of a Linux kernel." Google Chrome is, of course, based on Apple's WebKit. Effectively, it is Safari in another guise. So, for the new recipe, take one Apple WebKit, stick it as the sole application on a new desktop manager (see Gnome or KDE as examples of a desktop manager), and bake it on Linux. Bottle with pure hype and brand it "Google!" Does anyone see Microsoft running scared from this?

The technology is old, but the difference will be how Google packages the technology. And that will be what Microsoft has to be wary of!

Tuesday 7 April 2009

nslookup and DNS on Server 2008

Testing your new Server 2008 DNS server and finding that queries for the primary domain return nothing but root server referrals? We did. Using nslookup requires an extra command if you are running the test on a client with a default domain name:

set srchlist=.

If you do not set this, the client appends its current domain to the search query. The response is that the server has not a clue what you are talking about and returns a root server referral (which, by the way, is the correct behaviour.)
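A sketch of an interactive session showing the difference (the server and domain names are placeholders):

```shell
C:\> nslookup
> server mydns01
> mydomain.local
    (root referral - the query actually went out as mydomain.local.mydomain.local)
> set srchlist=.
> mydomain.local
    (the query now goes out exactly as typed and is answered)
```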

Note that this is not the behaviour of a Server 2003 DNS server.

Friday 3 April 2009

Updates with attitude - Exchange with Forefront and VMware

We stumbled across these. I hope you can avoid the same issues.

Firstly Exchange with Forefront Security...

Symptom:
When you reboot your Exchange 2007 SP1 Hub and Edge role servers where you have Forefront for Exchange installed, you find that no mail is flowing.

Cause:
Some Forefront services do not pull themselves up in time after a reboot. Consequently the Exchange Transport service, which relies on Forefront being up and ready for action, fails to start. A manual start of the Exchange Transport service works with no issues.

Solution:
Remember to manually start your services after rebooting! There is rumoured to be a fix for Forefront in Update Rollup 4 or SP2, or both... See here. Don't hold your breath.

Secondly, VMware Whatsitcalled...

You may be aware that VI 3.5 Update 4 has been out since the end of March. For owners of VI 2.x or 3.x (other than 3.5 in any form) there are clear instructions on how to upgrade your infrastructure. It is worth following these to the letter. However, if, like us, you were running VI 3.5, the instructions end up being anything but clear.

The first thing to note in all of this is that they have changed the names of their products. See here. VMware is now being very Microsoft-esque, sticking 'v' next to a bunch of previously used names and calling products 'server' when in fact they are just over-hungry applications. Nevertheless, get used to vCenter Server instead of VirtualCenter, and vConverter... well, you get the drift.

So how to update to VI 3.5 update 4?

The first step is to download the VMware vCenter Server 2.5 Update 4 - includes Converter Enterprise (formerly VMware VirtualCenter) from the VMware download page. After doing all possible backups, run the installation and install vCenter 2.5 update 4 along with the latest version of the VI Client and Updater.

The second step is to update the ESX hosts. This is where the documentation breaks down. I thought the old ways of copying binaries and using boot disks were gone... Well, they are. Stop looking for esxupdate.zip; that is for VI 3.0.x. Instead, open the VI Client and install and enable the updater plugin. When you scan for updates you will now see an extra dozen or so updates that you could not see previously. Once you have applied these, you will note that the build number for your ESX servers goes up to 153875. Welcome to Update 4.

The final step is to go into each VM and update to the latest VMware Tools. How do I know this is the way to go? Check out the actual update for VI 3.5 Update 4, downloadable from the VMware Downloads site. It is simply an XML list of updates that need to be applied to make Update 4. There are no binaries in the update itself.

Happy updating!!!

Monday 16 March 2009

IPv6 Bites...

Since my talk on IPv6, it has come to my attention that IPv6 is in the top 5 urgent agenda items for President Obama's incoming Commerce Secretary. The details are all over the internet, even if you do not search very hard. One such article can be found at http://tiny.cc/FINMg. It seems the ongoing concern is that someone out there will be issued the last remaining IPv4 address sometime in the next 3 years...

Despite all this, our latest internet link is about to be implemented with good old IPv4. I made the enquiry on whether we can get IPv6 details for this connection. The response?

"We're also in the process of moving to IPv6 ourselves, sorry for any inconvenience."

Nice to know we are not alone...

Wednesday 4 March 2009

IT Managers' Meeting at Westminster


In the spirit of Ian Yorston's talk on micro-blogging and social networking, all the resources for this event are going to be hosted "in the cloud." I thought to announce this move straight away on Twitter. I was therefore rather amused to discover that, for the second day in a row, the micro-blogging site had fallen off its branch. Despite this knockback we shall soldier on.

As not everyone is on Twitter, or following my particular tweets, I hope that Ian will forgive me for using that conventional method of distribution: the corporate email.

The PowerPoint presentations will be hosted on my SkyDrive, that remarkable 5Gb of storage space provided to individuals for free as part of the Live package. The SkyDrive allows for both private and public documents. You can now take your presentation to your next meeting without a single piece of hardware on you. If they have internet, you can present it.

Of course, the SkyDrive is also a great place to share documents. You do this by adding documents to the Public folder in your SkyDrive. It is then possible to use an amazingly long and complex link to share your files. Which leads me to the fourth and final piece of public web estate... http://www.tinyurl.cc/

A single SkyDrive link will overflow the 140-character limit of Twitter. (The links below were 144 characters in length.) It also looks horrendous. Go to Tinyurl and enter the complex link address, and you are provided with a tiny URL to replace the massively complex one. Not only can I now include these links in my tweets, but my post here looks much neater. The URLs are practically permanent provided they are used; unused URLs are deleted after a year of inactivity. If you have ever received an email with the URL cut in half, you will also appreciate the usefulness of Tinyurl.
The other sites mentioned in Ian's talk were:
  • Hootsuite
  • Facebook
And so to the other presentations:

http://tiny.cc/GbsMD VMware - Server Virtualisation, presented by Richard Hindley. This details Westminster School's implementation of, and reasoning behind, using VMware to virtualise servers and keep data safe.

http://tiny.cc/P7EsV App-v - Application Virtualisation presented by me. A lively discussion on Microsoft's idea of Virtual Applications and possibly the future of application delivery?

http://tiny.cc/vAOVx To VLE or not - On VLEs and MLEs presented by Ian Phillips. There is much more in this document than time allowed Ian to present.

http://tiny.cc/3Y7Ua Pupil Connectivity - This presentation was not shown in the end. Instead we had a discussion on the subject. This presentation details the thinking behind Westminster's implementation of a Cafe style wireless.

http://tiny.cc/bjApH IPv6 - A technical look at why we should keep an eye on IPv6. Presented briefly by me.

Enjoy...

Tuesday 3 March 2009

Exchange 2007, the GAL and Custom Address Lists

Exchange 2007 has new ways of approaching address lists. As a consequence, address lists created in previous versions of Exchange may now seem immutable and not upgradeable. The Console returns a most helpful message advising you to use the -ForceUpgrade option. Rather than the LDAP filters used by previous implementations, Exchange 2007 now uses the OPath filter format.

As with many of the more complex tasks, we must ditch the moderately functional Exchange Management Console and use the Exchange Management Shell. As a tip when working with any shell, I recommend that you use Notepad to create your command strings (which can get very long) and then cut and paste them into your shell window.

To create a new address list you need to use the New-AddressList cmdlet. An example of this is:

new-addresslist -Name "List Name" -Container "\Parent Container Name" -RecipientFilter { ( ( property -eq 'value' ) -and ( property -like 'value' ) ) }

If you look at the list of address lists in the Exchange Management Console, you will understand the properties above. To amend the filter afterwards, use the Set-AddressList cmdlet:

set-addresslist -Identity Name -RecipientFilter { ( ( property -eq 'value' ) -or ( property -like 'value' ) ) }

These property values are not the LDAP or AD equivalents, so you need to check http://technet.microsoft.com/en-us/library/bb738157.aspx (the SP1 list) for the appropriate property you are filtering against. If you are amending an existing pre-2007 address list, this is where we use the -ForceUpgrade option:

set-addresslist -Identity Name -RecipientFilter { ( ( property -eq 'value' ) ) } -ForceUpgrade

If you do not include a new filter, but just the -ForceUpgrade switch, nothing happens and the filter remains in the old format.
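By way of a concrete, hypothetical example, an address list of all mailboxes in a Sales department could look like the following - Department and RecipientType are genuine filterable properties, but verify any property name against the TechNet page above before relying on it:

```powershell
new-addresslist -Name "Sales Staff" -Container "\" -RecipientFilter { ( RecipientType -eq 'UserMailbox' ) -and ( Department -eq 'Sales' ) }
```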

You can create an address list in the Console and then edit it using the Shell, but there is a gotcha: the address list must not use any of the standard filters. If it does, adding a recipient filter in the Shell will fail. So when you create the address list, select the 'None' radio button under "Include these recipient types." Better still, skip the Console altogether. Typing into a shell is becoming a lost art. Be part of the revival...

Finally, Exchange 2007 SP1 still appears to have issues with the Default Global Address List. In our CCR implementation, updates to the GAL still do not occur at the scheduled times. Consequently we have to update the GAL manually every time we make an amendment. This is simply:

update-globaladdresslist -Identity "Default Global Address List"

Job done!

Wednesday 25 February 2009

Installing Forefront Client on Windows Server 2008

Installing Forefront Client Security on Windows Server 2008 seems like a daunting task. It can be done, but with a little extra help. If, like me, you already have a WSUS server and you have fewer than 1000 clients, a two-server solution is fine. The first thing is to read through the documentation found at:

http://technet.microsoft.com/en-gb/library/bb432630.aspx

However, it does appear that the technical manual department for the Forefront team did not want to spend much time on the documentation. There is a lot of repetition and a few things that are left out or merely implied. We will go through and identify these missing instructions...

The first thing to notice is that there are separate instructions for installing on Server 2008. These are specifically for the 32-bit version of Server 2008; as noted in the hardware requirements, Forefront will not work on the 64-bit version of Windows Server 2008. However, these instructions correctly state that you MUST do everything in the order listed. For your own sanity, do so, while adding the bits below:

The first missing/implied instruction concerns completing the installation of .NET 1.1 SP1. To complete the installation of .NET 1.1 you have to wait until IIS is installed. After installing IIS, you need to complete the .NET 1.1 installation, or the MOM components (I know, you don't have MOM) will fail and the entire installation will falter.

Step 1: Run the aspnet_regiis.exe /i command from the C:\Windows\Microsoft.NET\Framework\v1.1.4322 directory.

Step 2: Open IIS Manager. Click on the Server name. Under IIS in the main panel, open ISAPI and CGI Restrictions. Change "ASP.NET v1.1.4322" to 'allowed'.

The other concerns a peculiar problem with Reporting Services. You should check the following URLs to ensure that Reporting Services is installed:

http://reportservername/Reports
http://reportservername/ReportServer

If you get an error message amounting to a lack of permission on the installation account's part, there is a way round this. It is, in fact, a UAC issue. (Please do not run to switch UAC off...) To get round it, go into the Start menu and elevate IE by right-clicking the IE link in the Programs menu. Then go to the first URL. You now have a "Site Settings" link in the top right-hand corner. Click it. At the bottom of this page you have a "Configure site-wide security" link. Add the installation account as a System Administrator and System User.

Checking the second URL now results in a blank directory list... Okay? You are now ready to complete the Forefront Client installation. (Remember that the installation will run elevated, so don't worry if you still get the error when running IE when not elevated.)

You may find yourself frustrated trying to run the distribution server installation on your WSUS server. SERVERSETUP.exe just seems to crash every time. If you have WSUS v3 then you are wasting your time running the installation program. The Forefront Client installation for the distribution server is intended to fix WSUS v2 installations so that they can poll for updates every hour; WSUS v3 does this already, as installed. Consequently, you do not need to install anything on your WSUS server. Instead, change the synchronisation frequency to anything up to 24 times a day, depending on how soon you want to receive antivirus and malware updates.
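If you prefer to script the frequency change, the WSUS 3.0 administration API exposes it. This is a sketch to be run on the WSUS server itself, assuming the administration assembly is present:

```powershell
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
$sub = $wsus.GetSubscription()
$sub.NumberOfSynchronizationsPerDay = 24   # hourly pulls of definition updates
$sub.Save()
```

The same setting is available in the WSUS console under Options -> Synchronization Schedule.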

Oh, and one more thing. The requirements state that a 32-bit installation of Server 2008 is required for the distribution server. Seeing that you do not need to install anything on the WSUS server with v3, you can still have your 64-bit cake there and eat it...

Finally, do not forget that to be truly compatible with Windows Server 2008, you need SP1. This is obtained through Windows Update rather than as a separate download. So get synchronising...

I still have some hair left.

Monday 23 February 2009

PCs need updates to accept Group Policy Preferences

Group Policy Preferences is a relatively new feature. Consequently, PCs or Servers that have not been updated may not pick up the preferences set on up-to-date administrative workstations.

For example: suppose you wanted to apply registry settings to your 300 PCs across the site. You can use Group Policy Preferences to do this very easily, even pulling the registry changes directly from the administrative workstation. However, some PCs do not appear to pick up the preferences. On closer inspection, they have not yet had the latest updates to Group Policy Preferences. This can happen if a PC was recently rebuilt and is awaiting its next update window.

After running Windows Update, check for the changes and, bingo, there they are...

Friday 20 February 2009

Getting App-v to work with a Publishing Server

The way Microsoft is heading with App-V, we will all be dealing with it in a few years' time. One day, it will be the way all applications are delivered to your PC: an application living as a distinct entity, yet interacting with the resources of the host PC. It is like discovering pot plants after years of trying to weed a garden. If you knew you could isolate each plant, yet share the sunlight, rain and shade of the garden, you would never plant another in the ground again unless absolutely necessary. Later versions will allow you to place two pots side by side so that they can share the same resources...

The latest offering, version 4.5, expands SoftGrid's approach by offering extra flavours. Not only can you stream an application from the Publishing Server or from a Terminal Server, you can now create an MSI file that loads the application directly into the App-V cache and run the application without the need of a Publishing Server. Integration is also possible with System Center Configuration Manager.

But there is a downside to this expansion of the possible ways to use App-V: they become incompatible with each other. You cannot mix either of the MSI approaches with a Publishing Server, and there are registry keys that mess with the Publishing Server approach. One is:

HKLM/SOFTWARE/Microsoft/SoftGrid/4.5/Client/Configuration/RequireAuthorizationIfCached


This needs to be set to 1 if the Publishing Server is to work as expected with user targeting; otherwise the application may not run for some users, even though it appears in the Start menu or on the desktop.
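From an elevated prompt, that value can be written with reg.exe (note the backslashes reg.exe expects, and that the value is assumed here to be a DWORD - worth confirming against your own installation):

```shell
reg add "HKLM\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Configuration" /v RequireAuthorizationIfCached /t REG_DWORD /d 1 /f
```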

Microsoft recommend that you deploy the client application first and then apply the registry settings to make the application tick. I would suggest using SCCM for this. Use of Setup.exe is almost essential, as there is a prerequisite application buried in the setup file. Don't even bother trying to extract it. You might see the name of the prerequisite during setup; don't bother going to look for it. Though you may install the Visual C++ 2005 Redistributable yourself, Setup still wants to run its own version. Attempting to deploy the MSI alone through AD results in miserable failure.

Now to deploying the registry settings. If you are in an AD environment, you may not want to spend the effort writing a Windows Script Host (or PowerShell) script to deal with this. After all, Microsoft gives you Group Policy Preferences. This is a great idea that saves time. However, beware! The key registry setting that will cause the Publishing Server to silently fail to deploy its applications is:

HKLM/SOFTWARE/Microsoft/SoftGrid/4.5/Client/Configuration/UserDataDirectory

Why? When you place the %appdata% variable into Group Policy Preferences, the machine expands the variable when applying the preference. The result is an entry that refers to the appdata of the system account, which is unreachable for normal user accounts. The App-V client appears to refresh, but no applications appear. If you check the list of applications, they will be listed, but will not show any icons.
Solution: resort to a startup script that writes that particular key or, better still, do not add that key to the preferences at all. The default registry setting on installation is %appdata%...
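If you do take the startup-script route, remember that batch files swallow single % signs; doubling them writes the literal, unexpanded string into a REG_EXPAND_SZ value. The value data below is an illustrative guess at the default - check what your own installation writes before copying it:

```bat
:: startup.cmd - write UserDataDirectory with %appdata% left unexpanded
reg add "HKLM\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Configuration" /v UserDataDirectory /t REG_EXPAND_SZ /d "%%APPDATA%%" /f
```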

Finally, it may be worth remembering that App-V is currently only 32-bit. The 64-bit version is apparently due in 2010...

Finding that out after two days of pulling my hair out did not do wonders for my health. Hopefully, this tip will help you avoid the same pain.