DNX/CoreClr: and again…

A couple of months is a while when you haven’t touched things… so I needed a refresher on this set of errors:

dnx -p .\project.json run


The current runtime target framework is not compatible with 'GroundTrucks'

dnvm list


dnvm install latest -r coreclr


dnx -p .\project.json run


“…does not contain a static ‘Main’ method suitable for an entry point”

Fair enough…




Out of interest, I backspaced on the 23427 above, to see if anything newer is on offer. There is, so I’ll take that:


That leads to this error:


“Error: Dependencies in project.json were modified. Please run “dnu restore” to generate a new lock file.”


Key commands again then (I know I’m repeating stuff, but this might in fact be the first time I’ve put them in one place):

dnu restore
dnvm list
dnvm install latest -r coreclr
dnvm install latest -r clr
dnx -p .\project.json run
dnu pack --configuration release


And finally for tonight, I wanted to find out where the DLLs get stored by default. While this does not directly answer that, it is good enough (thanks, Author).

That means I can run:

dnu pack --configuration release



Run a bit of ildasm to see how it all looks:






PowerShell: recursively find and update text


This was inspired by a requirement which boiled down to this:

Step 1

Given a root folder, first tell me all the extensions that you find under that root, so that I can make a judgment about which file types I want to update. For example, if I saw .xml and .txt, then I might want to update the text content of those, but if I saw, say, .jpg and .gif, then I would not. And note I want a case-insensitive search doing: in the jpg case, I expect you to find .jpg, .JPg, .jpG and so on.

So let us do that first:

# Note: Select-Object -Unique is case-sensitive, so normalize to lower case first;
# Sort-Object -Unique is case-insensitive by default
$uniqueExtensions = gci -Recurse -File | ForEach-Object { $_.Extension.ToLower() } | Sort-Object -Unique


I have built a set of dumb folders and files to test this. See here.

$searchString = 'bank'
$replaceString = 'tornado'
$rootDir = 'C:\temp4'

cd $rootDir
"*** Searching for files in [$pwd] containing [$searchString] ***"

#gci -Recurse | Select-Object Extension -Unique
$fileList = gci -Recurse -Path .\* -Include *.txt, *.ext

$fileList | % {
    $file = $_.FullName
    $content = Get-Content $file
    if ($content -like "*$searchString*") {
        "[$file]: found search string [$searchString], replacing with [$replaceString]"
        (Get-Content $file).Replace($searchString, $replaceString) | Set-Content $file
    }
}

See here for the code as well, and here for a set of folders and files you can use to test it.

SQLServer: random data continued

There are many posts which refer to Visual Studio’s ability to generate random data for SQLServer. That sounded great: Redgate-like capability for zero (extra) cost. It turns out there are no references to it post-VS2010, and I’m not going to keep a copy hanging around just for that. (And at £300 per seat for the privilege, my company will not be buying Redgate, much as I like it.)

It also turns out that people use a combination of rand, abs, checksum, newid and modulus to generate random data. I should add that I looked into ways other than CRYPT_GEN_RANDOM because that does not play nicely in a function, and I don’t propose writing lots of inline code repeating the same pattern time and time again.
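A minimal sketch of that abs/checksum/newid/modulus pattern (the source table and range here are just illustrative):

```sql
-- checksum(newid()) yields a different pseudo-random int per row;
-- taking the modulus before abs() avoids the abs(int-min) edge case,
-- and maps each value into [0, 99].
select top (10)
       abs(checksum(newid()) % 100) as random_0_to_99
from sys.objects
```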

All the necessary information for a way forward is here. The article is beautifully simple – absolutely nothing needs adding.

However… :-) so now I want, as above, to use this in a function, something like this (just playing…)



Really? Getting a GUID visibly does not change the state of the database, surely. So it will all have to be inline – what a pain.

Maybe if I call out to CLR…

Another option is to create a temporary table stuffed full of the checksums for a million GUIDs, and then use that as our tally table that the function calls into. But that does mean we have to pass an index into the function, so that we’re not getting back the same hash every time. Hm, maybe the CLR route might give us a way.
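That tally-table idea might be sketched like this (all object names are mine, and note it would have to be a permanent table, since a user-defined function cannot see a true temp table):

```sql
-- One-off setup: pre-compute a million GUID checksums as a tally table.
-- The triple cross join of sys.objects is just a cheap row multiplier.
select top (1000000)
       identity(int, 1, 1)    as id,
       abs(checksum(newid())) as hash
into   dbo.t_random_tally
from   sys.objects a cross join sys.objects b cross join sys.objects c;
go

-- The function itself has no side effects, so SQLServer allows it;
-- the caller passes an index so we don't get the same hash every time.
create function dbo.fn_random_hash (@index int)
returns int
as
begin
    return (select hash
            from   dbo.t_random_tally
            where  id = 1 + @index % 1000000);
end;
```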


An SSD to replace my HDD

In my 2012 i3 laptop, I replaced the old 512GB hard disk with a 240GB SanDisk SSD.

These are the numbers…

With the HDD in place:


With the SSD in place:


The improvement in speed is of course immediate; I am pleased, in case that doesn’t come across 🙂

Being greedy however, I did notice that SanDisk claim these figures for this disk, which is at the low end of their pricing:


I’m getting a bit better than half that 520 number. Perhaps the SanDisk “dashboard” running on the i3 explains this:


There is zero chance that I will upgrade the interface – it’s good enough as it is.

I paid £40 for this cheaper model (currently about £55 retail).

The same size in the Ultra II is £63 right now, and £89 for the Extreme Pro:


While the 240GB is quite small, I don’t need a lot of volume for things that run on my main laptop, so this price and size suits me just fine.

October 2016

I now need a bigger drive due to the size of the NI Komplete 10 suite I bought recently. While I wait for a decent offer on a 1TB SSD (£150 has been seen in the past, don’t know if Brexit and the pound sterling rate will now affect that), I’ve been looking at 1TB SSHD. The best of the bunch seems to be WD. Good review here.

What I can’t tell emphatically is what it will mean in MY real world. These are the numbers from that review. You can draw your own conclusions compared to the HDD and SSD above:

SQLServer: random integers

The Rand() function in SQLServer is next to useless for anything set-based: it is evaluated once per query, so every row gets the same value.


select * into t_1 from sysobjects
select rand() from t_1

I’m not going to add it in, but the addition of a seed helps not a jot.


However… CRYPT_GEN_RANDOM() is your friend (the performance overhead might need review, but I don’t care for my use-case, which is to generate test data):



More interesting:


That variety gives me everything I need, and can also be used as the basis for randomizing dates in the past and future using date arithmetic, and floats.

select * into t_1 from sysobjects

select rand(),
       CRYPT_GEN_RANDOM(8),
       abs(convert(int, CRYPT_GEN_RANDOM(4)))
from t_1
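As a sketch of the date idea (the +/- 365-day window and source table are arbitrary choices of mine):

```sql
-- abs(checksum(newid()) % 731) lands in [0, 730]; subtracting 365
-- gives an offset in [-365, +365] days, i.e. a random date within
-- a year either side of today, per row.
select top (10)
       dateadd(day,
               abs(checksum(newid()) % 731) - 365,
               convert(date, getdate())) as random_date
from sys.objects
```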



.Net: a client for an IIS-hosted WCF service

This is the third and last post on the basics of WCF usage. I use SoapUI to consume the WSDL which exposes the single method we created in the first post, and hosted using IIS in the second post. None of this covers how to install SoapUI, although hopefully there is enough here to show you how to use it.


At the end of the previous post, we had access to the service wsdl via the link in blue:


Click on the link, and copy the URL that comes up next to the clipboard:


Start SoapUI, and create a project as shown here. My PC is called [i7], so adjust your entries to match your setup, and then click OK:


After that, the left side panel should look something like this:


Double click [Request 1] and the main panel looks like this:


Just to see what happens, click the green play button in the top left-hand corner:


If you now put e.g. 101 rather than the default [?], you get…


… which is exactly what we expect and want.


.Net: Hosting a simple WCF service in IIS

This assumes you have read the previous post, which left us with a simple service. However, it was only hosted/running while we were in debug mode in Visual Studio. We don’t want to be, and can’t be, dependent on a running VS instance. This post looks at how to take that service and host it in IIS. It does not involve using Visual Studio.


This is the folder that contains our code and binaries for the service:


Turning to IIS, this is how it looks before I change things:


Add an application…


Actually, I shall interrupt this to bring the service files under the inetpub/wwwroot folders – most of them anyway, and then whittle away at them to confirm the minimum set I need:


In fact, I’m fairly confident about that minimum set, so I’ll just go ahead and cull them giving this (note that I have renamed app.config to web.config):


… and the dll.config may go as well, don’t know yet:


Now we go back to IIS and add all that as a new application to the Manager:


OK, and now we have this:


Now, just naively browsing to the root URL for that will get us nowhere:


Reminding ourselves of the content of that root folder shows only 2 files, both of which are important… and wrong right now:


web.config: both the highlighted entries should be removed (but keep a copy of the HelloAcmeService/HelloService entry for the svc file in the clipboard). You may need to be in admin mode to edit web.config under the IIS nodes:



This is wrong in that the Service tag is pointing at the interface, and should be referencing the implementation, that is, IHelloService needs to change to HelloService:
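For illustration, the fix in the .svc file looks something like this (the HelloAcmeService namespace is taken from the web.config entry mentioned above; the exact directive in your file may differ):

```
<%-- before (wrong): Service names the interface --%>
<%@ ServiceHost Language="C#" Service="HelloAcmeService.IHelloService" %>

<%-- after (right): Service names the implementation class --%>
<%@ ServiceHost Language="C#" Service="HelloAcmeService.HelloService" %>
```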


After those changes, this is now what we see if we Browse Website in IIS:


This location will never have a default or any readable web page – it is all about delivering a Service. If we now put an appropriate entry in the URL, we see…

Hm, interesting… this is symptomatic of specifying the interface, not the implementation:


And wadya know, I hadn’t actually saved the change (or maybe the admin rights prevented it) after taking the screenshot. So making sure I press Save after this edit (in fact I found I needed to stop the web site in IIS first):


Now we go again to the URL:


That is good, if it is not obvious. That’s enough for this post, I think. The next post will talk about the wsdl, and using that in SoapUI to test the service from that angle.