Monday, 31 March 2008

Code Signing for ClickOnce - The Free Way

When writing ClickOnce applications deployed over the web, you are going to need to sign your manifest files with a certificate. In development you can quite happily test this by using the "Create Test Certificate" option and sign with a certificate issued by yourself.


It's not ideal though - particularly if more than one person is developing and deploying the product - as each person ends up with their own certificate. Deploying to test becomes a chore as you have to uninstall from the test device each time to avoid manifest errors.


This issue can be compounded when you have components in the "partially trusted caller" category, such as InfoPath forms running inside the Windows InfoPath form control. Each of these forms can also be signed with a test certificate - the end result being that anyone who makes a change to the form will be told that the certificate is invalid, re-sign it with their own, and have further fun deploying to test.

The answer is to create a common code-signing certificate using the makecert.exe tool. For example:

C:\Program Files\Microsoft Visual Studio 8\VC>makecert -r -pe -n "CN=Temp Code Signing" -b 01/01/2005 -e 01/01/2100 -sky exchange -ss my

creates a self-signed certificate with a public/private key pair in the "my" (Personal) section of your certificate store. You can export this using the certificate store MMC snap-in, and use the "Select from File" option to add the certificate to your project.
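
Once exported to a .pfx (with its private key), the certificate can also be used to sign manifests from the command line. A minimal sketch using mage.exe from the SDK - the file names and password here are hypothetical:

```shell
REM Sign the ClickOnce deployment manifest with the shared certificate
mage -Sign MyApp.application -CertFile TempCodeSigning.pfx -Password secret

REM Re-sign the application manifest after a change
mage -Sign MyApp.exe.manifest -CertFile TempCodeSigning.pfx -Password secret
```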



Now that it is a project file, the certificate isn't called "P_Devenney" or some such rubbish, and you can sign your InfoPath forms with it too. Each developer can sign with the same certificate, saving a load of test deployment hassle and simulating a live scenario far better.



You can actually use this in a live environment too - if you accept its disadvantage of being highlighted to the user as an unverifiable certificate. It does have an advantage over commercial code-signing certificates too - you'll see my certificate was set to expire in 2100! Unfortunately most providers sign each renewed code-signing certificate with a different private key - meaning that the end user has no choice but to uninstall and reinstall, having received scary warnings that the manifest is not from the same publisher! We actually use the makecert certificate in some intranet environments, as trusting the certificate once on each device is no real hassle.
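
In those intranet environments, the one-off trust of the certificate on each device can be scripted rather than clicked through. A sketch, assuming you've exported the public part of the certificate to TempCodeSigning.cer (run as an administrator; the file name is hypothetical):

```shell
REM Trust the publisher so the ClickOnce install doesn't prompt
certutil -addstore -f "TrustedPublisher" TempCodeSigning.cer

REM Trust the self-signed certificate as a root authority
certutil -addstore -f "Root" TempCodeSigning.cer
```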



Saturday, 22 March 2008

Selecting Content Types from the "New" menu gets the wrong content type

The scenario

  • Create 2 brand new content types. Provide each one with a Word 2007 document as its template (each with some different text to verify the result).
  • Attach both content types to a document library.
  • From the "new" dropdown on the doc library select the last content type on the list.
  • Add some text and save back. The content type attributed to the document will be the first one in the list (the default) *even though* we selected the correct type.
  • Saving to disk and inspecting "item4.xml" inside the docx shows the wrongly selected content type. Correct the document's content type in SharePoint, download to disk, inspect item4.xml again and you will see the correct content type.

This is a bit of a weird one. I'm going to do a bit of investigation and report back through my blog. It's rather frustrating when you have a different workflow that kicks off on the creation of a document of each content type.

Friday, 21 March 2008

The B2B upgrader timer job failed

I found a great article on getting WSS 3.0 SP1 working after receiving the worrying "The B2B upgrader timer job failed" error message. I'm only linking it here to try and bump it up the rankings so others can benefit.

Thursday, 21 February 2008

SQL Server 2005 Partition Problems

We have a replication scenario where we have SQL Server 2005 replication publications on development, test and live servers. Recently the test server publication, having not been touched for about 3 months (between releases), suddenly stopped synchronising correctly with the following message:

The merge process failed because it detected a mismatch between the replication metadata of the two replicas, such that some changes could be lost leading to non-convergence. This could be due to the subscriber not having synchronized within the retention period, or because of one of the replicas being restored to a backup older than retention period, or because of the publisher performing more aggressive cleanup on articles of type download-only and articles with partition_options = 3


The effect was that all data seemed to come down, but the result of the synchronisation attempt was failure. A bit of googling turned up an incredibly similar issue where partition_options=3. There is a hotfix available on request for this exact issue. Helpfully, Microsoft say "change this", but leave you with the usual link-to-link hell of trying to find a posting on MSDN that actually tells you how to make the change.

Now in our case we were actually using partition_options=0, but I found that following the process below fixed it for me:

  1. Set partition_options=1
    To do this you will need to use partition groups. Execute the following SQL against your publication database:

    sp_changemergepublication @publication='MyPublication', @force_invalidate_snapshot=1, @property='use_partition_groups', @value='true'

  2. Then, either run stored procedure statements, or as I did, through Management Studio -> Replication -> Local Publications -> "My Publication" -> Properties -> Articles -> Article Properties -> Set Properties of all article tables
    Change the "Partition Options" from "Overlapping" to "Overlapping, disallow out-of-partition data changes"

  3. Reinitialise the snapshot

  4. Change the partition options back to overlapping

  5. Run sp_changemergepublication @publication='MyPublication', @force_invalidate_snapshot=1, @property='use_partition_groups', @value='false'

  6. Reinitialise the snapshot.


Synchronisation seems to have worked fine from this point. As this was our UAT environment there was little actual harm done, but you'd probably want to take precautions in a live environment - as unmerged data on local devices would be lost in the above process.
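
For reference, steps 2 and 4 above can also be done with stored procedure statements rather than through Management Studio, via sp_changemergearticle. A sketch against a hypothetical database and article - check the partition_options values against your own setup before running anything like this in anger:

```shell
REM Step 2: disallow out-of-partition data changes (partition_options=1)
sqlcmd -d MyPublicationDb -Q "EXEC sp_changemergearticle @publication='MyPublication', @article='MyTable', @property='partition_options', @value='1', @force_invalidate_snapshot=1, @force_reinit_subscription=1"

REM Step 4: back to plain overlapping (partition_options=0)
sqlcmd -d MyPublicationDb -Q "EXEC sp_changemergearticle @publication='MyPublication', @article='MyTable', @property='partition_options', @value='0', @force_invalidate_snapshot=1, @force_reinit_subscription=1"
```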

Tuesday, 8 January 2008

OWSTimer Hogging Processor Part 2

It seems there was more...

The fundamental issue on this VM environment turned out, in all probability, not to be the time synch issue, but a lack of resources in the environment.

3 server pools had been set up: dev, test and live. VMWare allocated the resources 33% to each. There were 3 live servers, and one of them had been given an exact memory allocation equating to most of the resources available to the server pool.

Of course, if you only ever log into your VMs through RDP then your machine will tell you how much RAM the VM image believes to be in its "hardware". The effect: VMWare Infrastructure Client reports that resources are not fully utilised (so no problems), but your VM client reports maximum processor usage (normally OWSTimer is the offender when you notice, or sometimes the mssearch service).

The answer in the case of this environment was simply to remove the server pools, and allow all the servers to contend normally for the resources they required. Server pools are a double-edged sword...

Monday, 31 December 2007

OWSTimer Hogging Max Processor Time in VMWare

This issue isn't exclusive to VMWare, but is much easier to come across there. You may notice your MOSS server being completely unresponsive for several minutes at a time. On investigation you find that OWSTimer is taking up 80%-100% processor utilisation. I have particularly found this occurring on VMWare MOSS installations, often straight after you restart a VM that has been off for a significant amount of time.

The issue here is clock time synch. If the SQL Server and SharePoint server clocks do not agree then OWSTimer gets itself in a right tizz and maxes the processor. In a VM environment this might look particularly weird - as your VM host may be reporting plenty of resources available.

To resolve this (in VMWare), do the following:
  • Stop and disable the OWSTimer and Windows SharePoint Services Tracing services
  • Install VMWare Tools if not already installed on the SQL and SP boxes
  • Right click on the VMWare System Tray icon and tick the "Time Synchronisation between the virtual machine and the console operating system" option
  • Ensure your VM host synchronises time from your internal time synch service (i.e. in synch with the AD controller)
  • Verify that all servers are time synched
  • Restart the OWSTimer and Tracing services
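
The time-synch steps can be checked from the command line on each box using the standard Windows time service tools. A sketch - the timer service name is assumed here to be SPTimerV3, the WSS 3.0 timer service:

```shell
REM See which time sources the domain's servers are using
w32tm /monitor

REM Force an immediate resync against the configured time source
w32tm /resync

REM Once the clocks agree, re-enable and restart the timer service
sc config SPTimerV3 start= auto
net start SPTimerV3
```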

Why does this happen so much in VMWare?

Well, VMWare has issues when the CPU goes into power-saving modes, meaning that the clocks don't correctly calculate time (e.g. the VM image thinks it is running at 2.4GHz when it is actually at 1GHz), and therefore get out of synch easily. The same effect seems to occur when you power up a VM image that has been left off for months.

Tuesday, 6 November 2007

Search Server 2008 Express RC

Today Microsoft released a bombshell: Search Server 2008 Express

In summary:
  • It's free
  • Standalone product
  • All the features of MOSS search
  • Limited to one query server - so not scalable for the very large enterprise; look at the other variations if you need that

This is going to really change the game for the low end users.

It's in RC now, so I'd think the end of Q1 will be the target launch. I'll post something in the next day or so relating my experiences with it.