
Visual Studio 2017 - Dynamics 365 SDK can't connect to server

1. I downloaded the Developer Extensions and the Developer Toolkit (the VSIX for Dynamics).

2. Updated the manifest to version 15 instead of 14.

3. Got both installations completed 

4. Instead of mapping the installed folders, I mapped the path to the most recent version of the DLLs.

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/download-tools-nuget

And now I can't make the connection. When I try to start a new project I get the window that asks if I want to reconnect to CRM. It shows my server, organization, and user and asks "do you want to use this connection?" I click yes and then I get this error:


Log Azure DevOps Build Times to Application Insights

I've been working a lot with Azure DevOps recently to automate Dynamics 365 CE deployments. One of the first things I noticed was that using the hosted build agents resulted in slower builds than I was used to, at least compared to performing the same operations on my local machine. It's not surprising given that for each build the agent is spun up and torn down, so nothing can be reused from previous runs. On your local machine NuGet packages are already downloaded, any build tools are already installed, etc., so naturally it's way faster. Building a simple plug-in and running a few unit tests was taking close to 2 minutes. Which is fine I guess, considering rarely would anyone be staring at it waiting for it to finish.

The thought though was that once you started building real projects with lots of unit tests, the build could end up taking quite a long time to finish, potentially backing up other processes. I figured it would be a good idea to keep an eye on how long builds are taking by showing the average time on a dashboard, in hopes that it would draw attention if the times started getting long without anyone really noticing. I checked the Visual Studio Marketplace but didn't find any existing tools to handle this, so I thought I'd put one together using Application Insights to store the data. There was already a free dashboard widget available which could surface that data, so I figured it would be the easiest route to take.

I wanted to log the time at the end of the build (knowing the results will be off a few seconds) as opposed to relying on some other process like a Web Hook / Azure Function combination. I decided to use a PowerShell task as the last step of the build to do the logging.

Have a look.


Here’s a quick rundown on what the code is doing.

The initial portion calculates the number of minutes based on when the build started and when the code ran. I'm using one of the predefined variables called System.PipelineStartTime to establish the start. As far as I can tell that isn't actually documented anyplace, but if you do a dump of all the system variables it shows up. The number of minutes you end up with is relatively close to what is reported if you look at the completed build in the UI, give or take 6-10 seconds. I figured that was an acceptable margin of error since you'd probably never look to optimize a build if you were only going to gain a few seconds.

You'll need to provide the Instrumentation Key for your Application Insights instance, so there's a basic check to make sure that was provided. Of course, store that in a Variable Group or as a Build Variable instead of hard coding it, to make changing it easy. There's also the side benefit of hoping that nobody trolls my GitHub account because I had example code with hard coded credentials to simplify things for those people who are still learning. :)

I chose to use the Metric type in Application Insights to record the data, being that it's numeric in nature. You can see the JSON body of the request that is going to be made; most of it is the boilerplate request. There are two places where you can make changes to the data being sent. Under 'metrics' the 'name' can be changed to anything you'd like. I used "Build Time" to describe the type of metric being recorded and the build definition name so I knew which build it came from. You could make it anything, but be aware that this is used for aggregation, so you won't want any dynamic values here. The other place is under 'properties'. Those can be any key/value pairs you want, so if you want to capture detail about the build agent, values from the predefined build variables, or something else entirely you can put it here and you'll be able to query on it in Application Insights.

Finally, it checks to see whether the request was successful. It certainly doesn't hurt to be verbose with the logging. Trying to debug failed builds when you have to wait 2+ minutes between runs is painful, so do yourself a favor and do whatever you can to speed that process up.
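
Since the original script embed may not come through here, below is a minimal sketch of what that logging step could look like. It assumes the Instrumentation Key comes in through a build variable named InstrumentationKey, posts to the public Application Insights ingestion endpoint, and uses example property names; treat it as an outline rather than the exact script.

# Basic check that the Instrumentation Key was supplied (InstrumentationKey is an assumed variable name)
$instrumentationKey = "$(InstrumentationKey)"
if ([string]::IsNullOrWhiteSpace($instrumentationKey)) {
    Write-Error "No Instrumentation Key was provided"
    exit 1
}

# Minutes elapsed since the pipeline started ($Env: form so a Task Group won't turn it into a parameter)
$start = [DateTime]::Parse($Env:System_PipelineStartTime)
$minutes = [Math]::Round(((Get-Date) - $start).TotalMinutes, 2)

# Metric envelope - 'name' under metrics is the aggregation key, 'properties' can be any key/value pairs
$body = @{
    name = "Microsoft.ApplicationInsights.Metric"
    time = (Get-Date).ToUniversalTime().ToString("o")
    iKey = $instrumentationKey
    data = @{
        baseType = "MetricData"
        baseData = @{
            metrics    = @(@{ name = "Build Time $(Build.DefinitionName)"; value = $minutes; count = 1 })
            properties = @{ BuildId = "$(Build.BuildId)"; AgentName = "$(Agent.Name)" }
        }
    }
} | ConvertTo-Json -Depth 10

# Send it and be verbose about the result to make failed builds easier to debug
$response = Invoke-RestMethod -Method Post -Uri "https://dc.services.visualstudio.com/v2/track" -ContentType "application/json" -Body $body
Write-Host "Application Insights accepted $($response.itemsAccepted) of $($response.itemsReceived) item(s)"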

Another quick note: I'm using 2 different formats when referencing variables, and that is on purpose. The majority are in a format like $(Build.DefinitionName), but in the instance where the pipeline start time is used it's in the format $Env:System_PipelineStartTime. The reason is that I have this script in a Task Group (which basically combines multiple steps into 1 that can be reused) and by default it will scan through all the variables you're using and turn them into parameters. In this case I want the value to be retrieved at run time instead. I could have jumped through a few hoops and exported and reimported the task group after editing by hand, but I figured out that just using $Env: instead causes it to be skipped when parameters are gathered.

In a future post I’ll walk through getting the information on an Azure DevOps dashboard.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 1

What was I hoping to accomplish by doing this?


Working with Azure DevOps build and release pipelines can be a slow and tedious process, even more so using the hosted agents that are provided. I find it hard to keep track of filenames and the folders they end up in when I've got to keep it all in my head, because it only exists for a short time and then is gone. It's further complicated when needing to dynamically create any of these values. It feels a lot like trying to debug a plug-in using only the trace logs. Change code, deploy, execute, wait, review the log, and repeat. The difference is the waiting part is measured in minutes rather than seconds.

The goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run EasyRepro / Selenium UI tests, and be able to deploy as needed. All that, plus be faster because I’m impatient.

Containers at a high level


Prior to a few weeks ago I hadn't put much thought into the concept of containers, since the majority of Dynamics 365 CE development is done right inside the application. When scenarios came up where a plug-in wouldn't work, that functionality ended up on an existing VM that ran integration jobs or turned into an Azure Function. Functions and VMs really are very different from one another if you think about it. An Azure Function is cheap, easy to set up, and can scale but lacks flexibility when it comes to getting at lower level functionality that isn't exposed or being able to install additional components. A virtual machine is usually expensive to run, requires constant maintenance, and is slow to start up but provides the ability to use a wider and more complex array of software.

In my eyes a container falls nicely in between. Using Azure to run the container you'll end up paying for storage space for the images, which will certainly be more than a Function but probably not more than a VM. A Function and a VM both bill based on compute time. The big difference is that when a Function isn't actually processing something it's shut off and not adding to the bill. A VM on the other hand is accruing compute time as long as it's turned on, whether it's doing work or not. The pricing model for a container is closer to that of a VM, but the rates appear to be cheaper and costs are calculated per second as opposed to per hour. Turning things on and off to reduce costs is more suited to containers as they can often be up and running in a few seconds, while a VM could easily take a minute or more to fully start up and get itself into a state where applications can run.
To get an idea of the costs, here's what this setup costs to run: roughly $114 / month if you left it running 24/7. If you turn it off when not in use you'll see the Container Instance costs drop.

Management is easier using a container versus a VM. On the VM there is the worry about patching and all the possible ways someone could hack in because of the various services running, open ports, etc. Windows-based containers don't run a full-blown copy of the OS but rather a scaled-down version (Nano Server or Windows Server Core) based on a specific build of the full OS. Fewer features, less chance for someone to exploit something. The other point is that these operating systems aren't made to be patched in the traditional sense of running Windows Update. When it's time to update you're basically installing whatever components again from scratch on top of a new version of the OS image. Sounds painful, but it's really not once you've got the scripting in place (but up until that point it is very painful).

For more on containers: https://www.docker.com/resources/what-container


Plug-in compile & unit test build time comparison


I’m going to skip over the container build for the moment (covered in part 2) and go right to the end to show what kind of difference using a container made. The test case I used was compiling a bare bones plug-in and running a single unit test. As you can see from the time breakdown I think I managed to achieve what I was looking for.

Hosted VS2017 Agent
Task                                      Time
Queue time                                1s
Prepare job                               <1s
Initialize agent                          <1s
Initialize job                            7s
Checkout                                  14s
NuGet Restore                             1m 7s
MSBuild - Build Solution                  54s
Visual Studio Test Platform Installer     8s
VsTest – Execute Unit Tests               35s
Publish – Test Results                    5s
Post-job: Checkout                        <1s
Report build status                       <1s
Total                                     3m 14s

Private Agent (Azure Container Instance)
Task                                      Time
Queue time                                1s
Prepare job                               <1s
Initialize agent                          N/A
Initialize job                            <1s
Checkout                                  3s
Command Line Script - NuGet Restore       4s
MSBuild - Build Solution                  8s
Visual Studio Test Platform Installer     2s
VsTest – Execute Unit Tests               14s
Publish – Test Results                    4s
Post-job: Checkout                        <1s
Report build status                       <1s
Total                                     38s

So what are the differences?


Queue Time
Both were 1 second when only running 1 build at a time. Each agent can only run 1 job at a time by default without getting into parallel builds, multiple agents, etc. When you start lining up multiple builds back-to-back the queue times on the hosted agent are going to be considerably longer.

Initialize Agent
Not applicable for privately hosted agents.

NuGet Restore
These packages needed to be restored for the test I ran:
  • FakeItEasy
  • FakeXrmEasy.9 (@jordimontana)
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.Deployment
  • Microsoft.CrmSdk.Workflow
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly
  • Microsoft.IdentityModel.Clients.ActiveDirectory
  • MSTest.TestAdapter
  • MSTest.TestFramework
On the Microsoft hosted agent, NuGet.exe is already installed. Using the NuGet build task, before it attempts to download any packages it first needs to spend a couple of seconds registering NuGet.org as a package provider. Then it downloads and installs all the packages because nothing is cached. This was particularly long on the build I'm using for comparison, at 1 minute 7 seconds, but even the faster runs were taking in the neighborhood of 40 seconds.

On the container I pre-installed NuGet.exe so instead of using the NuGet build task I used a Command Line Script task and executed something like:

"C:\Program Files\NuGet\nuget.exe" restore $(Build.SourcesDirectory)\TestPlugins.sln -Verbosity Detailed -NonInteractive

After the first run of this build, all those packages were cached locally and available so it took only 4 seconds.

MSBuild - Build Solution
I couldn’t find anything referencing the specifications for the hosted servers. The Azure Container Instance had 2 vCPUs and 2 GB of memory. I suspect that’s more than gets assigned to the hosted agents and as a result the build time is considerably faster.

Visual Studio Test Platform Installer
This is an out of the box build task which installs VSTest.Console.exe needed to run .NET unit tests. In hindsight this step probably wasn’t needed on the hosted agent since it’s already installed by default.

I spent a fair amount of time trying to get this installed on the container image without success. Again in hindsight it would have been easier to install a full copy of Visual Studio 2017 (which would have included this) instead of trying to install the bare minimum number of components I thought I'd need for a capable D365 build & test server. The flip side though is the container image becomes larger, more costly, and more cumbersome to deal with. The bright side is that once it's installed it's available for future use without re-downloading and re-installing. The build task is smart like that and first checks if it's there before blindly installing. That 2 seconds was just to check if it was installed. The bigger reason I wanted to get it installed was to simplify and reduce the number of steps a person would need to go through to create a build. It's just one more thing for someone new coming in to forget and have to waste time on because the tests won't run.

VsTest – Execute Unit Tests
I again attribute the difference to the virtual hardware specs likely being better.

Part 2 will cover what went into creating the container.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 2

In Part 1 I covered the reasoning for choosing a container and the resulting performance gains in the build process.

Creating the container image

I'm not the first person to think of using a container for an Azure DevOps build agent by any means. There are a number of people who've blogged about it, and you can get pre-configured, Microsoft-created Azure DevOps agent Dockerfiles from Docker Hub. The problem with what's on Docker Hub and most of the blogs is that most everything is Linux. Fine if you want to build .NET Core projects, but Dynamics 365 CE needs the regular .NET Framework which isn't going to run on Linux. From what I could see, Microsoft doesn't have a Windows-based agent on Docker Hub, so I set off to build my own.

Remember the goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run headless EasyRepro / Selenium UI tests, and be able to deploy as needed. Here’s what I came up with. I’m sure it’s probably not the best or cleanest way of going about this, but it works.

Dockerfile



start.cmd



start.ps1



Dockerfile


Image Creation Basics
As you can see, to get software above and beyond what comes on the base OS image you need to write PowerShell or DOS commands to obtain and install it.

When looking through the script, if you see this it means the commands that follow are executed in PowerShell.

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

When you see this, it means the commands that follow are executed at the Windows command prompt.

SHELL ["cmd", "/S", "/C"]

Everything that gets executed starts off with RUN (environment variables are ENV), and you'll notice that multiple commands are chained together to execute in one batch. The reasoning for this is the way Docker assembles, or layers, the images. Each RUN command creates a new layer on top of the OS with the changes made. Layers are cached for reuse as the image is assembled. The more items you can group together, the fewer layers you end up with. This can be a good or bad thing.

If there's a failure building the image on a particular layer, instead of starting over from the beginning (and potentially wasting a lot of time) it will use the cached layers and start again at the one that failed. If the failure happens in the middle of 12 items chained together, it will have to start over at the first item in that layer. If you successfully create an image and then make a change someplace in the middle, it will use the cached images prior to the change but then need to re-perform all the actions on subsequent layers as that change might affect them. As such, it's better to put the items that will change the least first and then work your way up toward ones that might get versioned more frequently. If I revise this image I'll probably put more thought into that and reorder some things.
Some of these commands aren’t easy to test and escaping characters combined with environment variables and file paths with spaces in them gets pretty confusing. I ended up doing a lot of trial and error to get some of those combinations correct which was frustrating but it would have been even worse had the image creation needed to start from scratch each time.
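
As a purely illustrative fragment (not the actual Dockerfile for this image), the pattern looks something like this: the SHELL switch shown above, followed by related commands chained into a single RUN so they land in one cached layer. The tool, URL, and paths here are placeholders.

# escape=`
# (Parser directive commonly used in Windows Dockerfiles so backslashes in paths aren't treated as line continuations)

# Run the instructions that follow through PowerShell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

# One RUN instruction = one cached layer: chain the folder creation and the download together with semicolons
RUN New-Item -ItemType Directory -Path 'C:\Tools' -Force | Out-Null; Invoke-WebRequest 'https://dist.nuget.org/win-x86-commandline/latest/nuget.exe' -OutFile 'C:\Tools\nuget.exe'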

Base image
I ended up using this image: microsoft/dotnet-framework:4.7.2-runtime-windowsservercore-ltsc2016

I had originally picked a later build of Windows Server Core but when I went to deploy to the Azure Container Registry (ACR) I found out that it doesn’t support that version of the image so I had to start all over with this image. ACR only supports Long-Term Servicing Channel (LTSC) versions of Windows. LTSC being versions of Windows that will be supported for 5 years and then have 5 additional years of extended support. It should also be mentioned that you don’t need to acquire the base OS for the container if you’re using something from a public repository. If you are using Docker to build the image, it will automatically download the image for you to use. This could be just an OS, or an OS with software already installed. In that case buyer beware I suppose. As far as licensing is concerned, Windows 10 allows you to run containers for development purposes. For production use, the Windows Server license of the host covers containers. I’m assuming then if you’re running the container in Azure, licensing is also covered.

NuGet
Downloads and installs NuGet.exe so we don’t have to use the Azure DevOps NuGet build task later.

Visual Studio Build Tools & Web Targets
This downloads and installs the Visual Studio test agent, base build tools, and the specialized build tool packages for the different things related to Dynamics 365 CE we might want to build. It also removes the install files once complete. You'll notice it's installing the test agent, but it doesn't seem to be compatible with what is required for running tests from Azure DevOps. What it does do is ensure the agent reports to Azure DevOps that the test agent is installed, to satisfy build task demands. If the agent doesn't show it meets the demands or requirements needed to run the build, it won't even start. You could skip this and manually edit the list of demands for the agent in Azure DevOps, but the downside is that you'd need to do it for every hostname the container gets. When you stop and start a container it receives a new and different hostname, which then shows up separately in the list of build agents. Not a big deal, but after a while you'll end up with a long list of dead agents that will need to be manually deleted. There's probably an API someplace for that, but the ideal solution would be to figure out a way to set or keep the same hostname each time. That might not be possible considering you can start any number of container instances and having the same name probably wouldn't fly.

Path Variables
So things like NuGet can be executed from anyplace in the file system.

.NET Targeting Packs
For building against different versions of the .NET Framework.

Build Agent
This installs the build agent software so the container can communicate with Azure DevOps. This URL comes from the Agent Pools page.

PowerShell Modules
This sets up NuGet.org as a package provider and installs the Azure and Dynamics 365 CE specific PowerShell modules we might need later. Having these pre-installed should prevent the need to import them each time in scripts requiring them.
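
The exact module list isn't shown here, but as a hedged example the commands for this step probably look something along these lines; the module names are assumptions, so swap in whatever your scripts actually need.

# Register the NuGet package provider and trust the gallery so module installs don't prompt
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

# Azure and Dynamics 365 CE modules used later by deployment scripts
Install-Module -Name AzureRM -Force -AllowClobber
Install-Module -Name Microsoft.Xrm.Data.PowerShell -Force -AllowClobber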

Suppress Error Dialogs
Not sure this is 100% necessary but these registry entries should prevent Windows error dialogs which might hang something up.

Chocolatey Installs
Downloads and installs Chocolatey which then can be used to download other software packages. In this case Git, Node.js, and the Azure CLI. If you aren’t familiar with Chocolatey it’s package management for applications. Lots of commonly used software can be installed and updated from here via script.
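
For reference, this amounts to the standard Chocolatey bootstrap followed by a single choco install; the package ids below are the public Chocolatey names for these tools.

# Bootstrap Chocolatey itself
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Then pull the tools silently
choco install git nodejs azure-cli -y --no-progress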

NPM Installs
Grunt and gulp task runners.

Google Chrome Install
I tried installing Chrome via Chocolatey but ended up having problems getting it to work. It's hard to say if that version was the issue or if something else might have been wrong with the image during the testing. Once installed it appears to be v71, which is the latest while I'm writing this. I'm not 100% sure what's special about the version that comes from the URL I'm using, but it seems to be used widely across the internet for silent installs. I should also point out that EasyRepro currently supports v2.38 of the Selenium Chrome driver. That version only lists official support for Chrome v65 – 67. So maybe it's just luck that it works and this is an unsupported setup. I tried updating the Chrome driver to a version which supports the latest Chrome browser version and then had issues running the tests. Since that's all kind of a mess I'd recommend just sticking to a known working combination like this.

Start.cmd

Just used to run start.ps1.

Start.ps1

This is used to configure and start the build agent when the container starts. I pulled this from the blog which is noted at the top. This needs your Azure DevOps instance name and a Personal Access Token (PAT) in order to connect to your agent pool. I'm sure there is probably a better way of handling those values other than hard-coding them into this file; I just haven't had a chance to look for it yet.
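
For reference, a stripped-down sketch of what start.ps1 does, assuming the agent was unpacked to C:\BuildAgent and the two values are hard-coded placeholders (replace them with your own, or feed them in as environment variables instead):

$instance = "YourInstanceName"            # Azure DevOps organization / instance name
$pat      = "YourPersonalAccessToken"     # PAT with Agent Pools (read, manage) scope

# Configure the agent unattended, register it in the default pool, then start listening for jobs
Set-Location "C:\BuildAgent"
.\config.cmd --unattended --url "https://$instance.visualstudio.com" --auth PAT --token $pat --pool Default --agent $Env:COMPUTERNAME --replace
.\run.cmd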

Conclusion

I'm glad I took the time to work through setting up a container because I learned a lot. I'm also glad I wrote these 2 blogs; when digging into the hows and whys I learned a lot more.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 3

Part 1 covered the reasoning for choosing a container and the resulting performance gains in the build process.

Part 2 reviewed the components that were chosen to be installed to support the build process.

Setup

Maybe a little bit out of order, but this is going to focus on getting up and running with containers on Windows. Again there are almost certainly different ways, but this is what I ended up doing.

Download & install Docker Desktop for Windows. You'll need to create an account on Docker Hub before it will let you download. Select the option to use Windows containers (if you miss it you can switch later) during the install. When that's done start it up and log in with your Docker ID. When you see it running in the system tray, right-click and see if any of the menu items say "Switch to Windows containers". If so, select that. Either it doesn't get set correctly during install or it's doing something else that I'm unaware of.

If you want to create the image I was using, download the 3 files from Part 2  or get updated copies from the GitHub repo and put them in a folder. Edit start.ps1 with your Azure DevOps instance name and Personal Access Token.

Open a PowerShell prompt as Administrator and navigate to the new folder. 

Building the image

Here's the command to start building the image. The period at the end is part of the command. The "-t" flag allows you to assign a name of your choosing and a tag like a version number to the image. The "-m" flag here is capping memory use to 2GB. This configuration will use any cached layers in the process to save time.

docker build -t d365agent:v1.1 -m 2GB .

If you don’t want to use layers from the cache for whatever reason you can disable the cache by adding the no-cache option.

docker build --no-cache -t d365agent:v1.1 -m 2GB .

Prior to doing this, if you are on a laptop or have power saving options enabled, you might want to adjust your settings so that your machine doesn't go to sleep while this is running. That seems to hang the process up and you'll need to restart from the point at which that happened. If it does happen to stop, you can restart from where you left off by running the same command (with cache enabled). If you make changes prior to restarting, the process will start at the lowest level that was changed.

You should see it start downloading the different parts of the Windows Server image and then start running through the installation steps defined in the Dockerfile. This process is going to take a while the first time through since it's going to download close to 6GB. Once it's downloaded, the base OS image will be cached for future use. Keep in mind as well that all the items getting installed need to be downloaded, so you might as well grab a coffee and come back because it's going to be running for a while.

When that finishes we now have an image that can be used in a container.

Creating and running a local container instance

You can create a new container instance using this image on your local machine using the following command. We're telling it to automatically restart in case it gets shut down, specifying some names, and finally telling it which image to use.

docker run -it -d --restart always --name "d365agent" --hostname "d365agent" d365agent:v1.1

Now that the container instance is created you can use the start and stop commands to turn it on or off.

docker start d365agent

docker stop d365agent

If you want to access the filesystem of the running container use this command and you will be dropped into a PowerShell session in the container. This is good for verifying things got installed properly in case the logs during the build process don’t provide enough information.

docker exec -it d365agent powershell

Use the exit command to end the session. Be mindful of which filesystem you're in; it's easy to forget at times which one you are connected to.

If you've provided the correct Azure DevOps information, you should be able to check the available agents and see your container. If you don't specify an agent pool name in start.ps1, it will go into the default pool. Now when you run a build and specify this agent pool it will run on your local machine.


Creating an Azure Container Registry

Next you'll want to create an Azure Container Registry, which will be private by default. There aren't many options to pick from so I'm not going to cover it. If you want screenshots refer to the quickstart. The only special option I picked was to enable the Admin User. All that does is allow you to use the registry name as the username when publishing images. In reality you maybe wouldn't want to do that for a production instance, seeing as if someone figures out your registry name they would also have the username, leaving only the password to figure out. Unlikely, but anything is possible. Otherwise it's an individual login or service principal.

At this point you can go through the process of setting up GitHub integration and build your image in Azure but I’m just going to upload the one from my local machine that I’ve already created.

Again open a PowerShell prompt as Administrator and navigate to the folder with your Dockerfile.
First you’ll need to log into the Azure Container Registry with this command replacing MyRegistry & AccessKey with the appropriate values from Settings –> Access Keys.

docker login MyRegistry.azurecr.io -u MyRegistry -p AccessKey

Next tag your local image for the Azure Container Registry with this command. It’s pointing at the local image we created earlier and it’s going to be placing it into a repository named “d365/d365agent”. The first “d365” acts as a namespace to categorize different images.

docker tag d365agent:v1.1 MyRegistry.azurecr.io/d365/d365agent:v1.1

Then use this command to upload the image.

docker push MyRegistry.azurecr.io/d365/d365agent:v1.1

It will need to upload all the image layers so again there will be some waiting involved.

Creating an Azure Container Instance

The only thing left to do is create an Azure Container Instance from the image. If you navigate to Repositories you will see “d365/d365agent”. If you drill in further you’ll see the initial version of the container image. From there you’ll have a Run Instance option. Once again the steps are pretty simple but if you want the visuals check out the quickstart.

Make sure you choose Windows as the OS type since it’s not the default. Use at least 2 GB of memory and I’d try for 2 processor cores. You might be able to get away with 1 if you aren’t using EasyRepro. When I tried to run those tests with just 1 core it consistently failed. Take note of which Resource Group you use as you’ll need to go there to manage the container instance once it’s created.

Once that starts you can sit back once again and wait while the container is provisioned.

Once that’s finished you should see something like this in Azure DevOps.

If you aren’t actively doing anything with the container instance I’d shut it down to save money.

Get Latest Solution Patch Name With PowerShell

Someone might find this useful if trying to move solutions around using Azure DevOps & PowerShell. I wanted to automate export of the latest solution patch using the Microsoft.Xrm.Data.Powershell library as part of a build pipeline. It's pretty straightforward, but it requires the solution name. That's simple for the base solution, but when dealing with a patch it's not possible to predict what the name is going to be in order to build it into your script.

I came up with this. Given a solution uniquename this sets build variables for the uniquename & version of the latest patch of a solution or the base solution's uniquename & version if no patches exist.
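
The embedded script may not show up here, so here's a hedged sketch of the approach, assuming Microsoft.Xrm.Data.Powershell is installed; the connection details, FetchXML, and variable names are illustrative rather than the exact script.

Import-Module Microsoft.Xrm.Data.Powershell
$conn = Connect-CrmOnline -Credential (Get-Credential) -ServerUrl "https://yourorg.crm.dynamics.com"
$solutionUniqueName = "MyBaseSolution"

# Pull the base solution plus any patches of it (patches point back at the base via parentsolutionid)
$fetch = @"
<fetch>
  <entity name='solution'>
    <attribute name='uniquename' />
    <attribute name='version' />
    <link-entity name='solution' from='solutionid' to='parentsolutionid' link-type='outer' alias='parent' />
    <filter type='or'>
      <condition attribute='uniquename' operator='eq' value='$solutionUniqueName' />
      <condition entityname='parent' attribute='uniquename' operator='eq' value='$solutionUniqueName' />
    </filter>
  </entity>
</fetch>
"@
$results = Get-CrmRecordsByFetch -conn $conn -Fetch $fetch

# Highest version wins - that's the latest patch, or the base solution if no patches exist
$latest = $results.CrmRecords | Sort-Object { [Version]$_.version } -Descending | Select-Object -First 1

# Expose the values to later build tasks as pipeline variables
Write-Host "##vso[task.setvariable variable=SolutionName]$($latest.uniquename)"
Write-Host "##vso[task.setvariable variable=SolutionVersion]$($latest.version)"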

Using Chocolatey to Distribute Developer Tools - Part 1

First off, what is Chocolatey? Chocolatey is a package manager for applications. It's quite literally NuGet for Windows, as the package formats are the same. There are one or more repositories (again, it's just NuGet) that keep track of all the packages available, and then you connect with a client to download and install an application. It's similar to NuGet in the sense that the community can contribute packages, but there appear to be fewer contributions from "official" sources like Microsoft, Google, Apple, etc. However, that doesn't mean you aren't able to download and install software from these companies through Chocolatey. Applications like Visual Studio, Chrome, and iTunes have all been downloaded, most likely from a publicly available link, and converted into a package by someone in the community. I should call out right away that this isn't meant to circumvent licensing or pirate anything. The original applications aren't being modified in any way, just having some scripting added around them to automate installations.

Since you're basically relying on "some guy in his basement" to create these packages, there is still some risk that you could download something harmful. Granted the packages are virus scanned and community moderated, but things can still slip through. No different than downloading something off NuGet, GitHub, or even the XrmToolBox for that matter. This just happens to be one of many approaches you could take if you are in a position to assert more control over the things your users / developers are downloading and installing. Making commonly used software easily available for people to install themselves lessens the need for them to search around the internet and potentially download something bad. Not to mention, self-service software installation can take some of the burden off the support team and free them up to do other things.

What goes into a package?


At the core there is an XML file which contains the metadata about the package; things like title, version, description, licensing info, dependencies, etc. can all be found here. This will be the source of the information everyone sees in the download feeds. Instead of DLLs like a typical NuGet package, you'll usually have an executable file of some sort, or the files needed to run the application if an actual installation isn't required. Around this is wrapped some PowerShell code which takes the downloaded content and performs the tasks required to get the application to a usable state. Ideally there should be no interaction from the user, as one of the big selling points of Chocolatey is for system administrators to use it to silently install and manage software across a large number of machines.

Each package is versioned by the repository so you can always go back to a previous version if need be, and just like NuGet packages they are immutable. Once it's published that's it, no changing it without increasing the version number. It can be painful if you're developing packages, but for consumers it can prevent the old bait and switch where people are tricked into downloading something bad that once worked perfectly fine. Chocolatey will also keep track of what is installed on your machine and make updates available as they are released. Additional PowerShell can be put in place to run during the uninstall process, so package creators can add any code required to assist with any cleanup that might need to happen on top of the application's normal removal process.

But I’m a Dynamics person and not a system administrator


True, this is maybe geared a little more toward organizations than individuals, but that doesn't mean it couldn't be put to personal use. For developers, just think about the next time you need to rebuild your primary development machine. Hunting down all the installs and clicking through everything is about a day-long process, at least it is when I need to do it.

This is my example use case – creating a way to more easily install the Dynamics 365 CE SDK tools (Plug-in Registration Tool, Package Deployer, etc.). Back when there was a single download for the SDK it was easy because all the tools were right there. I can understand why managing that was probably difficult and going to a solely online-based SDK was the way to go, but it also introduced some new challenges in making these tools available. The new process for downloading them from NuGet using PowerShell is documented, but I'll still say it would have been easier just to point people to the manual download link and then rename .nupkg to .zip and be done with it. Imagine this situation: you're dealing with someone who isn't really a developer but has just enough knowledge of the platform to know how plug-ins work, and you're trying to describe to them over the phone how to edit a plug-in step. You'll end up asking them if they have the Plug-in Registration Tool installed and of course they'll say "no", so then you start telling them they need to download it from NuGet and they'll usually end up responding something to the effect of, "WTH is nugget?". So then you just ask for remote access and do it yourself.

Getting started


Install Chocolatey
https://chocolatey.org/install
Now you’re probably thinking that this is all command line stuff that you don’t want to memorize just to make it “easier” to install the latest version of an application. Not to worry, there is a UI that can be installed to make things a little more friendly.

Install Chocolatey UI
https://chocolatey.org/packages/ChocolateyGUI
At this stage you should have Chocolatey up and running and can download packages the community has already made available.

Creating and hosting your own packages


You could go through the process of creating a package and then upload it to the main Chocolatey site, but obviously if you're creating something solely for personal or private use, public hosting isn't an option. The other thing I'd throw out there is that it might be a violation of some terms of service or other legalese somewhere that forbids redistributing copyrighted material in this way, so if you don't want "the lawyers" giving you a call you'd better look for another alternative.

Remember way back in the first paragraph where I mentioned that Chocolatey and NuGet are basically the same thing? That means there are a number of ways to create your own private NuGet server. The software itself is open source, so if you want to stand up your own copy and manage the hosting, that is an option. There are also some paid services / products which may work for you. ProGet offers a commercial product that you run on your own server (which is a pretty good deal if you've got a lot of users and need AD integration), or there's MyGet, which is a cloud-based offering that has paid and free plans.

Instead of those, I'm going to focus on using Azure DevOps to host packages & feeds and to automate the package maintenance process. Azure DevOps is free up to 5 users or included with certain MSDN subscriptions. Package management (Azure Artifacts) is a paid add-on which is available with the same free access and then charges per user, per month after that, which last time I checked starts to get pricey when you've got a lot of developers.

In part 2 I'll go into the project & package set up and the code used to keep the packages up to date. As I mentioned earlier, there will be examples showing how to convert the SDK Tools into packages as well as how you can replicate packages from other feeds (like Chocolatey.org) for use in your own curated feed. Part 3 will cover how to create Artifacts in Azure DevOps and how you can use a Build Pipeline to watch for new versions and automatically update the packages. That last part is probably the most important. Creating an installable package only to use it once is a waste of time. The real benefits come from installing or updating multiple times on multiple machines.

Using Chocolatey to Distribute Developer Tools - Part 2

This is going to focus on the Chocolatey package set up and the code used to build updated packages.

Project structure


A single git repository which contains folders for each package / application / tool. 
Each package folder contains the following:
  • .nuspec file which provides the metadata about the package
  • PowerShell file containing the scripts that will go into the Azure DevOps Build Tasks to:
    1. Build the updated package
    2. Push to the NuGet feed
    3. Commit the updated files back to source control
  • Tools folder containing 2 PowerShell files
    1. chocolateyInstall.ps1 which handles the installation once it’s on the target machine
    2. chocolateyUninstall.ps1 which handles the uninstallation from the target machine

.nuspec file


You can choose to change the metadata to whatever you'd like, but I'm going to use what the original package contains. Later you'll see in the code used to update the packages that I'm just reading from the downloaded content and updating the matching elements.
The package <id> you'll want to change to something unique, at the very least a variant of the original. Assuming you're running the package update process in the same place as your feed (like Azure DevOps), you might run into an issue where it uses your private feed first to retrieve the package as opposed to the public source you actually intended. I'm guessing that's by design, but the issue I ran into popped up in the script comparing the version in the .nuspec in source control versus the package that is publicly available. Since we aren't specifying a version (because we won't really know what it will be once this process is automated), it pulls back the first copy it finds based on the registered sources. When the compare happens it sees the same version and aborts because it doesn't show there's an update that needs to happen.

If you’re going to have the process use the public package for metadata you can get away with filling in any dummy data and after the first pass it will get updated to the real values. If you want to look at what’s there ahead of time you could use this simple PowerShell to retrieve the package metadata.


Be careful of the version numbers as once you’ve pushed to the feed you cannot delete a package, you can only un-list it or publish a new version with a higher version number. If you use your own version numbering scheme it’s probably not as big a deal but more than likely you’ll want to use the same version number as the public package.

Full .nuspec documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec

Package building scripts



Part 1 – building the package


This is used in a PowerShell build task and starts off with variables for the public package we’re using as a base and the corresponding file/folder locations in the project.

Next is the version comparison between the latest version of the public package and what is currently in the .nuspec file. The Plug-in Registration Tool is on NuGet, so we use NuGet.org as a source. If you're using a Chocolatey package as the source then you'll need to use Chocolatey.org as the source. Using Azure DevOps, NuGet is already registered as a source but Chocolatey is not, in which case you'll need to use Register-PackageSource and target Chocolatey.org (example). Once the comparison is made, if the version hasn't increased then the process stops. Before doing so, a Pipeline Variable is set so we can use it to prevent any future build tasks from running. The other thing of note is the <files> section, which determines what should be included when the package command is run. In this case we want just the tools folder and the .zip file containing the actual content, and we'll be ignoring the readme file and this PowerShell script file.

Once it's been determined an update is needed, the public package is downloaded. As part of the process I'm giving it a common name that corresponds to a value in the install file. This is solely to have one less thing to change when reusing this code.
From the package used in the version comparison I’m updating the .nuspec in my package. This isn’t a necessity, just more for information purposes. I’m also setting a Pipeline Variable here with the new version number so I can use it in a later build task. 

At this point the only thing left is to run the Chocolatey command to create the package from the content of the current directory based on what is defined in the .nuspec file. Chocolatey is installed by default on the Azure DevOps hosted build agents, but if you're running this from anyplace else you'll need to run the PowerShell command to install it first.
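
Putting those pieces together, a condensed sketch of the build-package script might look like the following. The package id, paths, and the NewVersion variable name are illustrative; ContinueUpdate is the Pipeline Variable referenced again in part 3.

$packageId  = "Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool"
$nuspecPath = "$(Build.SourcesDirectory)\PluginRegistrationTool\pluginregistrationtool.nuspec"

# Compare the latest public version with the version committed in the .nuspec
$public = Find-Package -Name $packageId -ProviderName NuGet -Source "https://www.nuget.org/api/v2"
[xml]$nuspec = Get-Content $nuspecPath
if ([Version]$public.Version -le [Version]$nuspec.package.metadata.version) {
    # Nothing new - flag the rest of the pipeline to skip its tasks
    Write-Host "##vso[task.setvariable variable=ContinueUpdate]false"
    exit 0
}
Write-Host "##vso[task.setvariable variable=ContinueUpdate]true"
Write-Host "##vso[task.setvariable variable=NewVersion]$($public.Version)"

# Download the public package, update the local .nuspec to match, and build the Chocolatey package
Save-Package -Name $packageId -ProviderName NuGet -Source "https://www.nuget.org/api/v2" -Path "$(Build.SourcesDirectory)\PluginRegistrationTool"
$nuspec.package.metadata.version = $public.Version.ToString()
$nuspec.Save($nuspecPath)
choco pack $nuspecPath --outputdirectory "$(Build.ArtifactStagingDirectory)"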

Part 2 – publishing the package


This is used with a NuGet build task to upload the completed package to the feed which is going to host it. Specific to Azure DevOps I’m using a Personal Access Token that has read/write access to Packages to authenticate since the feed isn’t public. Also note that I used the NuGet v2 endpoint as opposed to the newer v3 endpoint. The format looks like this:

https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

Here's where you'll run into a 409 error if you try to upload a package with a version number that already exists. I'll cover more about the feed set up in part 3 of this blog.

Part 3 – Committing the changes


This is used in a separate PowerShell build task run after publishing. Since I've made modifications to the .nuspec file, they'll need to be committed back to source control so the next time this runs the package process won't run unnecessarily after a version comparison. As part of the commit comments I'm using the Pipeline Variable I set earlier with the new version number we're updating to. In the Azure DevOps build you also need to enable the option Allow scripts to access the OAuth token so that we can pass the System.AccessToken variable in the request header to authenticate.
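
A hedged sketch of that commit step, assuming the target branch is master and reusing the illustrative NewVersion variable from the packaging script; the http.extraheader approach is what the OAuth token option enables.

# Identify the build as the committer, stage the updated .nuspec, and push using the build's OAuth token
git config user.email "build@yourdomain.com"
git config user.name "Azure DevOps Build"
git add *.nuspec
git commit -m "Updated package to version $(NewVersion)"
git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin HEAD:master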

Installing & uninstalling


In this case these scripts are very simple. After a user chooses to install a package it's downloaded to the local machine, at which point chocolateyInstall.ps1 kicks off. The SDK tools aren't installable applications, so "installation" is just a matter of copying the files somewhere. I chose the user's Application Data folder and am creating a folder structure that will support installing tools from multiple sources. The Chocolatey unzip command will handle extraction and creation of any folders in the destination path that may be required. This is just unzipping my renamed package file I downloaded from the original source. To complete things I use the Chocolatey create shortcut command to make a shortcut on the user's desktop to the folder containing the executables.
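
A hedged sketch of a chocolateyInstall.ps1 along those lines; the folder structure, file names, and shortcut name are illustrative.

$toolsDir   = Split-Path -Parent $MyInvocation.MyCommand.Definition
$zipFile    = Join-Path $toolsDir "PluginRegistrationTool.zip"
$installDir = Join-Path $Env:APPDATA "D365Tools\PluginRegistrationTool"

# Chocolatey's unzip helper extracts the renamed package content and creates any missing folders
Get-ChocolateyUnzip -FileFullPath $zipFile -Destination $installDir

# Drop a desktop shortcut pointing at the folder with the executables
$desktop = [Environment]::GetFolderPath("Desktop")
Install-ChocolateyShortcut -ShortcutFilePath (Join-Path $desktop "Plugin Registration Tool.lnk") -TargetPath $installDir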

When things get installed, Chocolatey keeps the original package around so it knows what is on the machine, can determine if updates are available, do reinstalls, and access the uninstall script. The location will be similar to: C:\ProgramData\chocolatey\lib\YourPackage

As you might have guessed chocolateyUninstall.ps1 will run when the user chooses to uninstall the package. Since it’s not an installed application I’m just deleting the things I created during the install and deleting the package from the lib folder. There are a number of different Chocolatey commands you could use to uninstall but at the time when I worked on this I couldn’t get the one I believe I was supposed to use to work so I went down this path. For the full list of Chocolatey commands, check out their documentation.
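
And an equally small hedged sketch of the matching chocolateyUninstall.ps1, again with illustrative names.

$installDir = Join-Path $Env:APPDATA "D365Tools\PluginRegistrationTool"
$shortcut   = Join-Path ([Environment]::GetFolderPath("Desktop")) "Plugin Registration Tool.lnk"

# Remove what the install script created, then clear the cached package out of Chocolatey's lib folder
Remove-Item $installDir -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item $shortcut -Force -ErrorAction SilentlyContinue
Remove-Item (Join-Path $Env:ChocolateyInstall "lib\YourPackage") -Recurse -Force -ErrorAction SilentlyContinue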

That covers the package creation process. Part 3 will look at setting up Azure Artifacts & a Build Pipeline to keep things continuously updated, as well as how to use the new feed in the Chocolatey clients. In case you missed it, part 1 gave an overview of Chocolatey and why you might want to use it in the Dynamics 365 space.

I've got everything up on GitHub so you can refer to that in case I don't end up explaining something clearly enough.

https://github.com/jlattimer/D365Chocolatey


Using Chocolatey to Distribute Developer Tools - Part 3

This is going to focus on setting up Azure DevOps to host the NuGet package feed and keeping the packages up to date. In case you missed the other posts in this series, part 1 provided an overview of Chocolatey and why you might want to use it as a D365 developer and part 2 focused on the package content and the scripts used to automate creation.

Feed setup


In your chosen Azure DevOps project head to Artifacts. If you don't see that as an option, make sure under Project Settings that Artifacts are turned on. From there it's just a matter of thinking up a name for the new feed. Once created, use the Connect to Feed option to retrieve the URL. We need this for the NuGet build task to push the package once it's created, as well as for connecting from the client. The v3 endpoint URL will be displayed here, but I'm using the v2 endpoint because it was the only way I could get it to work. It's straightforward to convert from the v3 to the v2 URL.

v3: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v3/index.json

v2: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

If you haven’t created a Personal Access Token (PAT) that has read/write access to Packages yet, do so now.

Pipeline setup


I created a different build for each package in the project.
The source will be the git repository hosting the project. This way the first thing that will happen is the project code will be downloaded to the build agent. We’ll need to reference the files for building the package around whatever we download as well as to check the current version we last built versus the current public version.


Step 1: PowerShell Script: Build package


You can inline the code from #1 in a PowerShell build task. If it’s determined a new package is available, the script will download the latest version and build the Chocolatey package from it.

Step 2: NuGet: Push to feed


Originally I wanted to make everything into a single PowerShell script and just use that, but I wasn't able to get the authentication working using the ApiKey parameter along with a Personal Access Token. Luckily using the existing NuGet build task worked fine. Use the command from #2 and select custom as the type. Additionally we don't want this step to run if a new package wasn't built, so to prevent that expand Control Options on the task and then Custom Conditions. Then use this snippet to check the Pipeline Variable created in step #1 to determine if the task should run or not.

eq(variables['ContinueUpdate'], 'true')


Step 3: PowerShell Script: Commit Updates


Inline the code from #3 in another PowerShell task to make sure your updated .nuspec file gets back into source control so it can be used next time the process runs. You'll also want to make sure that your .gitignore file excludes .zip and .nupkg files so we don't accidentally store redundant copies of those. Don't forget to add the same Control Option change as step 2.

Step 4: Build Triggers


The last thing is to set up a schedule so it keeps itself updated; this can be done from the Triggers tab inside the build. I'm using the free hosted agent and it doesn't allow parallel builds, so I staggered the start times on the day of the week I have this running. You could very well run them all at the same time and they'd just queue up and run one at a time anyway.

Done!

Chocolatey GUI


Here’s what you end up with after all this work. Click install and you’ll have the latest version of the tool installed in a few seconds. Click uninstall and with any luck (if you cleaned up after yourself properly) everything will be gone.





To connect to Azure DevOps go to Settings and then Sources.
  • Id: Display name for the feed
  • Source: The v2 Azure DevOps feed url
  • Username: Anything
  • Password: The Personal Access Token created earlier (or a different one with Read access to Packages). Using the normal username / password combination wasn’t working here.

A few other notes


Download counts, package size, and the package image don't get displayed. That seems to be an issue between Chocolatey GUI and Azure DevOps; setting up a feed from a different private NuGet source didn't have this problem.

Download all the code here: https://github.com/jlattimer/D365Chocolatey

Connect Dynamics CRM to Asp.Net Core 2.0

Can anyone refer me to documentation that shows how to connect an Asp.Net Core App to CRM online?

Things I found based on my research:

  1.  Web Api seems to be a logical way to get/query data from CRM.
  2. I haven't found a detailed document on how to connect an Asp.Net Core application to CRM.

Thank you in advance.

Call an external Web Service from Plugin - Dynamics crm ONLINE

Hi all.

I'm facing a permission error when I try to call an external web service from my plugin.

Notes:
1. The web service was developed by another company and they said that they could not put a friendly name (DNS) on this URL.
2. The web service address is in IP format (http://200.279.18......).

So, for a temporary solution, I built a console app run by Windows Task Scheduler every minute that calls the web service and updates the Quote entity. But I need this web service executed in real time.

Any help?

Run Your Automated D365 CE UI Tests From Azure DevOps

EasyRepro

If you hadn't heard about it already, EasyRepro is a UI testing framework for Dynamics 365 CE built on top of Selenium, which is one of the more popular UI testing frameworks available today. Those who might have tried using Selenium directly with D365 CE in the past only found pain and suffering. For most people that ultimately led to the conclusion that it wasn't worth investing huge amounts of time creating tests for little upfront return. In my opinion EasyRepro now makes creating UI tests feasible, as it abstracts away most of the complexities involved with Selenium development and boils down CE interactions to a single line of code in many cases. At the same time it's still flexible enough to extend or use underlying core Selenium functionality. If you're at all interested, download the Git repository and give it a test drive.

Running Tests in Azure DevOps

Once you've cloned the EasyRepro project and run some of the examples on your local machine, you'll quickly realize that you can't tie it up all day running tests. So if you're planning on running dozens or hundreds of these tests you'll need to find an alternative place to run them from. Azure DevOps (ADO) can be that alternative because it offers solutions for 2 different scenarios when it comes to running automated tests: running a group of tests as part of a build & release pipeline, and having non-developers run individual tests on demand.

Project Set Up

For example purposes I'll use a Visual Studio test project referencing the Dynamics365.UIAutomation.Api (EasyRepro) libraries from NuGet. I've additionally added in a reference to Microsoft.Azure.KeyVault to handle credentials, so we don't need to worry about them getting into source control or about replacing them in any of the ADO processes. For this example we'll just be using Google Chrome as the browser of choice for testing.

Build Set Up

Assuming tests are created and things are working locally, get the project checked into ADO so we can set up a build. It's going to work like a normal .NET project build.


Start off by creating a new build in ADO using an empty job. I've got a repository with the EasyRepro test project that I'm pulling the source from initially. This is executing on a private build agent (running inside a Windows container) so I'm just using the command line to do a NuGet restore, but you could also use the standard NuGet build task. Build the solution or project like normal using the release configuration. Then, probably the most important step, use the Publish Build Artifacts build task to publish the bin/release folder to Azure Pipelines/TFS. This is what makes the compiled assembly and other references available to the release we'll be setting up to run this.

To keep things updated, under Triggers, check Enable continuous integration so that the project gets rebuilt and published each time an update is made.

Release Set Up

In order for ADO to be able to run tests on demand we need to create a release pipeline. Again don’t start with an existing template, just pick the empty job. Select Artifacts & Add and then choose Build and use your project and the build pipeline you just created as the source. You have a few different options for the default version to use but I’d probably set it to use the latest each time.


In the first and only stage, open the tasks to begin setting up the test run. I start with a Visual Studio Test Platform Installer task. You might need this if you're running a private build agent. If you're using a Microsoft hosted agent you shouldn't need this because it's already installed, but it being there won't hurt anything. Then add a Visual Studio Test task and use version 2.*. Pay close attention to the configuration. Select tests using a Test run. Make sure the box is checked that indicates the Test mix contains UI tests. Last, make sure the test platform version is using the version Installed by Tools Installer.




Depending on the build agent, you may or may not need to install Chrome (or other browsers for that matter). If things aren’t working, try installing it with the following PowerShell script:

# Download the Chrome installer into the temp folder
$Path = $env:TEMP;
$Installer = "chrome_installer.exe";
Invoke-WebRequest "http://dl.google.com/chrome/install/375.126/chrome_installer.exe" -OutFile $Path\$Installer;
# Run the installer silently with elevation, wait for it to finish, then clean up
Start-Process -FilePath $Path\$Installer -Args "/silent /install" -Verb RunAs -Wait;
Remove-Item $Path\$Installer

Test Plan Set Up For Manual Test Execution

Hooking individual tests up to test cases is only required if you want the ability to run them on demand from the Test Plans area in ADO. It works better if you can link the release that was created to the test plan, but I’m pretty sure you’ll need either VS Enterprise, Test Professional, MSDN, or Test Manager to do so. If you’ve already got a test plan, right-click on it and go to Test plan settings. Select the build that was just created and leave the build number at latest. Then choose the pipeline and stage from the release setup.


Assuming you’ve got a Test Case created that matches one of the EasyRepro tests, head back to the project in Visual Studio, open up Test Explorer, and find the test. Right-click on it and choose Associate to Test Case. It’s probably a good idea to make sure you’re connected to the project in ADO before you do this.



Type in the Test Case Id then Add Association. Save and close and when you open the test case in ADO, the automation status should now say ‘Automated’. Repeat for any other tests.

Under Test Plans you should be able to select individual tests or groups of tests and then, under the green Run button, pick Run for web application to execute them. Having the release tied directly to the test plan saves the user from having to choose that same data each time, which avoids a few extra clicks and possible confusion.


A window will open confirming and validating the tests. If there aren’t any errors you can select View test run and sit back and wait for the tests to complete. With any luck they will, otherwise you’ll need to retrace your steps and figure out what’s wrong.



If you aren’t able to edit the test plan you can use Run with options, which opens a window where you can choose the items that got configured when linking at the test plan level. When selecting the build, the user will need to either know the build number or know enough to select Find and then pick the correct (likely the most recent) build and pipeline from the list.

Running In Builds & Releases Automatically

Really it’s just a matter of combining all the steps from the Build and Release setups above, with the exception of the Publish Build Artifacts task.


And that’s it. Not much to it once you figure it out so hopefully this saves you the trouble.

Dynamics 365 Trial


Hi,

Hopefully someone can help - I have registered for a Dynamics 365 trial version - I was hoping to explore creating workflows etc. - but I am not an administrator so can't do this. When I try to update my user it fails. Can anyone help?

thanks

Steven

How to extend the timeout on preview when hovering over fields in IE11?


Is there any way to extend the period a field's contents (ie not Label/Description) are displayed on mouseover please? We use CRM 2013 on premises via IE 11. Currently the preview fades after around 5 seconds. I would like to extend this to 30 seconds or even better make it unlimited. I note it is unlimited via Chrome 61. Thanks.


Log Azure DevOps Build Times to Application Insights

I’ve been working a lot with Azure DevOps recently to automate Dynamics 365 CE deployments. One of the first things I noticed was that using the hosted build agents resulted in slower builds than I was used to, comparatively speaking anyway to performing the same operations on your local machine. It’s not surprising given that for each build the agent is spun up and torn down and nothing can be re-used from previous runs. Whereas on your local machine NuGet packages are already downloaded, any build tools are already installed, etc. so naturally it’s way faster. Building a simple plug-in and running a few unit tests was taking close to 2 minutes. Which is fine I guess considering rarely would anyone be staring at it waiting for it to finish. The thought though was that once you started building real projects with lots of unit tests the build could end up taking quite a long time to finish, potentially backing up other processes. I figured it would be a good idea to keep an eye on how long they’re taking by showing the average time for a build on a dashboard in hopes that it would draw attention if the times started getting long without anyone really noticing. I checked the Visual Studio Marketplace but didn’t find any existing tools to handle this so I thought I’d put one together using Application Insights to store the data. There was already a free dashboard widget available which could surface that data so I figured it would be the easiest route to take.

I wanted to log the time at the end of the build (knowing the results will be off a few seconds) as opposed to relying on some other process like a Web Hook / Azure Function combination. I decided to use a PowerShell task as the last step of the build to do the logging.

Have a look.


Here’s a quick rundown on what the code is doing.

The initial portion calculates the number of minutes based on when the build started and when the code ran. I’m using one of the predefined variables called System.PipelineStartTime to establish the start. I’m not actually sure that’s documented anyplace, but if you do a dump of all the system variables it shows up. The number of minutes you end up with is relatively close to what is reported if you looked at the completed build in the UI, give or take 6-10 seconds. I figured that was an acceptable margin for error since you’d probably never look to optimize a build if you were only going to gain a few seconds.
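The gist of that calculation is only a couple of lines; here’s a minimal sketch, assuming the variable parses cleanly as a date:

# System.PipelineStartTime surfaces on the agent as an environment variable
$start = [DateTime]::Parse($Env:System_PipelineStartTime)
# Minutes between the pipeline being queued and this task running
$buildMinutes = [Math]::Round(((Get-Date) - $start).TotalMinutes, 2)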

You’ll need to provide the Instrumentation Key for your Application Insights instance so there’s a basic check to make sure that was provided. Of course store that in a Variable Group or at a Build Variable instead of hard coding to make changing it easy. There’s also the side benefit of hoping that nobody trolls my GitHub account because I had example code with hard coded credentials to simplify things for those people who are still learning. :)

I chose to use the Metric type in Application Insights to record the data, being that it’s numeric in nature. You can see the JSON body of the request that is going to be made; most of it is the boilerplate request. There are two places where you can make changes to the data being sent. Under ‘metrics’ the ‘name’ can be changed to anything you’d like. I used “Build Time” to describe the type of metric being recorded plus the build definition name so I knew which build it came from. You could make it anything, but be aware that this is used for aggregation so you won’t want any dynamic values here. The other place is under ‘properties’. Those can be any key/value pairs you want, so if you want to capture detail about the build agent, values from the predefined build variables, or something else entirely, you can put it here and you’ll be able to query on it in Application Insights.
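For reference, a trimmed-down sketch of that request is below. The metric name, the properties, and the InstrumentationKey variable name are placeholders, the $( ) tokens are Azure DevOps pipeline variables that get substituted before the script runs (more on that below), and the envelope layout follows the public Application Insights track endpoint format.

# Boilerplate envelope for a custom metric; $buildMinutes comes from the calculation above
$body = @{
    name = "Microsoft.ApplicationInsights.Metric"
    time = (Get-Date).ToUniversalTime().ToString("o")
    iKey = "$(InstrumentationKey)"   # assumed pipeline variable holding the key
    data = @{
        baseType = "MetricData"
        baseData = @{
            # 'name' is used for aggregation, so keep it static per build definition
            metrics    = @(@{ name = "Build Time - $(Build.DefinitionName)"; value = $buildMinutes; count = 1 })
            # Free-form key/value pairs you can query on later
            properties = @{ buildNumber = "$(Build.BuildNumber)"; agentName = "$(Agent.Name)" }
        }
    }
} | ConvertTo-Json -Depth 10

$response = Invoke-WebRequest -Uri "https://dc.services.visualstudio.com/v2/track" -Method Post -ContentType "application/json" -Body $body -UseBasicParsing
Write-Host "Application Insights responded with $($response.StatusCode)"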

Finally there’s a check to see if the request was successful. It certainly doesn’t hurt to be verbose with the logging. Trying to debug failed builds when you have to wait 2+ minutes between runs is painful, so do yourself a favor and do whatever you can to speed that process up.

Another quick note: I’m using 2 different formats when referencing variables, and that is on purpose. The majority are in the format $(Build.DefinitionName), but where the pipeline start time is used it’s in the format $Env:System_PipelineStartTime. The reason is that I have this script in a Task Group (which basically combines multiple steps into one that can be reused) and by default it will scan through all the variables you’re using and turn them into parameters. In this case I want the value to be retrieved at run time instead. I could have jumped through a few hoops and exported and reimported the task group after editing it by hand, but I figured out that just using $Env: instead causes it to be skipped when parameters are gathered.
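In script form the difference looks like this (the variable names are just the ones discussed above):

# Expanded by Azure DevOps before the script runs - gets turned into a Task Group parameter
$definitionName = "$(Build.DefinitionName)"
# Read from the environment at run time, so the Task Group leaves it alone
$pipelineStart = $Env:System_PipelineStartTime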

In a future post I’ll walk through getting the information on an Azure DevOps dashboard.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 1


What was I hoping to accomplish by doing this?


Working with Azure DevOps build and release pipelines can be a slow and tedious process, even more so using the hosted agents that are provided. I find it hard to keep track of filenames and the folders they end up in when I’ve got to keep it all in my head because it only exists for a short time and then is gone. It’s further complicated when needing to dynamically create any of these values. It feels a lot like trying to debug a plug-in using only the trace logs. Change code, deploy, execute, wait, review the log, and repeat. The difference is the waiting part is measured in minutes rather than seconds.

The goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run EasyRepro / Selenium UI tests, and be able to deploy as needed. All that, plus be faster because I’m impatient.

Containers at a high level


Prior to a few weeks ago I hadn’t put much thought into the concept of containers since the majority of Dynamics 365 CE development is done right inside the application. When scenarios came up where a plug-in wouldn’t work that functionality ended up on an existing VM that ran integration jobs or turned into an Azure Function. Functions and VMs really are very different from one another if you think about it. An Azure Function is cheap, easy to set up, and can scale but lacks flexibility when it comes to getting at lower level functionality that isn’t exposed or being able to install additional components. A virtual machine is usually expensive to run, requires constant maintenance, and is slow to start up but provides the ability to use a wider and more complex array of software.

In my eyes a container falls nicely in between. Using Azure to run the container you’ll end up paying for storage space for the images, which will certainly be more than a Function but probably not more than a VM. A Function and a VM both bill based on compute time. The big difference is that when a Function isn’t actually processing something it’s shut off and not adding to the bill. A VM on the other hand is accruing compute time as long as it’s turned on, whether it’s doing work or not. The pricing model for a container is closer to that of a VM, but the rates appear to be cheaper and costs are calculated per second as opposed to per hour. Turning things on and off to reduce costs is better suited to containers as they can often be up and running in a few seconds, while a VM could easily take a minute or more to fully start up and get itself into a state where applications can run.
To get an idea of the costs, here’s what this is costing to run: roughly $114 / month if you left it running 24/7. If you turn it off when not in use then you’ll see the Container Instance costs drop.

Management is easier using a container versus a VM. On the VM there is the worry about patching and all the possible ways someone could hack in because of the various services running, open ports, etc. Windows based containers don’t run a full-blown copy of the OS but rather a scaled down version (Nano Server or Windows Server Core) based on a specific build of the full OS. Fewer features, less chance for someone to exploit something. The other point is that these operating systems aren’t made to be patched in the traditional sense of running Windows Update. When it’s time to update you’re basically installing whatever components again from scratch on top of a new version of the OS image. Sounds painful but it’s really not once you’ve got the scripting in place (but up until that point it is very painful).

For more on containers: https://www.docker.com/resources/what-container


Plug-in compile & unit test build time comparison


I’m going to skip over the container build for the moment (covered in part 2) and go right to the end to show what kind of difference using a container made. The test case I used was compiling a bare bones plug-in and running a single unit test. As you can see from the time breakdown I think I managed to achieve what I was looking for.

Hosted VS2017 Agent
Task | Time
Queue time | 1s
Prepare job | <1s
Initialize agent | <1s
Initialize job | 7s
Checkout | 14s
NuGet Restore | 1m 7s
MSBuild - Build Solution | 54s
Visual Studio Test Platform Installer | 8s
VsTest – Execute Unit Tests | 35s
Publish – Test Results | 5s
Post-job: Checkout | <1s
Report build status | <1s
Total | 3m 14s

Private Agent Azure Container Service
Task | Time
Queue time | 1s
Prepare job | <1s
Initialize agent | N/A
Initialize job | <1s
Checkout | 3s
Command Line Script - NuGet Restore | 4s
MSBuild - Build Solution | 8s
Visual Studio Test Platform Installer | 2s
VsTest – Execute Unit Tests | 14s
Publish – Test Results | 4s
Post-job: Checkout | <1s
Report build status | <1s
Total | 38s

So what are the differences?


Queue Time
Both were 1 second when only running 1 build at a time. Each agent can only run 1 job at a time by default without getting into parallel builds, multiple agents, etc. When you start lining up multiple builds back-to-back the queue times on the hosted agent are going to be considerably longer.

Initialize Agent
Not applicable for privately hosted agents.

NuGet Restore
These packages needed to be restored for the test I ran:
  • FakeItEasy
  • FakeXrmEasy.9 (@jordimontana)
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.Deployment
  • Microsoft.CrmSdk.Workflow
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly
  • Microsoft.IdentityModel.Clients.ActiveDirectory
  • MSTest.TestAdapter
  • MSTest.TestFramework
On the Microsoft hosted agent, NuGet.exe is already installed. Using the NuGet build task, before it attempts to download any packages it first needs to spend a couple of seconds registering NuGet.org as a package provider. Then it downloads and installs all the packages because nothing is cached. This was particularly long on the build I’m using for comparison at 1 minute 7 seconds, but the faster runs were still taking in the neighborhood of 40 seconds.

On the container I pre-installed NuGet.exe so instead of using the NuGet build task I used a Command Line Script task and executed something like:

"C:\Program Files\NuGet\nuget.exe" restore $(Build.SourcesDirectory)\TestPlugins.sln -Verbosity Detailed –Noninteractive

After the first run of this build, all those packages were cached locally and available so it took only 4 seconds.

MSBuild - Build Solution
I couldn’t find anything referencing the specifications for the hosted servers. The Azure Container Instance had 2 vCPUs and 2 GB of memory. I suspect that’s more than gets assigned to the hosted agents and as a result the build time is considerably faster.

Visual Studio Test Platform Installer
This is an out of the box build task which installs VSTest.Console.exe needed to run .NET unit tests. In hindsight this step probably wasn’t needed on the hosted agent since it’s already installed by default.

I spent a fair amount of time trying to get this installed on the container image without success. Again in hindsight it would have been easier to install a full copy of Visual Studio 2017 (which would have included this) instead of trying to install the bare minimum number of components I thought I’d need for a capable D365 build & test server. The flip side though is the container image becomes larger, more costly, and more cumbersome to deal with. The bright side is that once it’s installed it’s available for future use without re-downloading and re-installing. The build task is smart like that and first checks if it’s there before blindly installing. That 2 seconds was just to check if it was installed. The bigger reason I wanted to get it installed was to simplify and reduce the number of steps a person would need to go through to create a build. It’s just one more thing for someone new coming in to forget and have to waste time on because the tests won’t run.

VsTest – Execute Unit Tests
I again attribute the difference to the virtual hardware specs likely being better.

Part 2 will cover what went into creating the container.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 2

In Part 1 I covered the reasoning for choosing a container and the performance gains that resulted in the build process.

Creating the container image

I’m not the first person to think of using a container for an Azure DevOps build agent by any means. There are a number of people who’ve blogged about it and you can get pre-configured, Microsoft created Azure DevOps agent Dockerfiles from Docker Hub. The problem with what’s on Docker Hub and most of the blogs is that most everything is Linux. Fine if you want to build .NET Core projects but Dynamics 365 CE needs the regular .NET Framework which isn’t going to run on Linux. From what I could see, Microsoft doesn’t have a Windows based agent on Docker Hub so I set off to build my own.

Remember the goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run headless EasyRepro / Selenium UI tests, and be able to deploy as needed. Here’s what I came up with. I’m sure it’s probably not the best or cleanest way of going about this, but it works.

Dockerfile



start.cmd



start.ps1



Dockerfile


Image Creation Basics
As you can see, to get software above and beyond what comes on the base OS image you need to write PowerShell or DOS commands to obtain and install it.

When you see this in the script, it means the following commands are executed in PowerShell.

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

When you see this, it means the following commands are executed at the Windows command prompt.

SHELL ["cmd", "/S", "/C"]

Everything that gets executed starts off with RUN (environment variables are ENV) and you’ll notice that multiple commands are chained together to execute in one batch. The reasoning for this is the way Docker assembles, or layers, the images. Each RUN command creates a new layer on top of the OS with the changes made. Layers are cached for reuse as the image is assembled. The more items you can group together, the fewer layers you end up with. This can be a good or bad thing.

If there’s a failure building the image on a particular layer, instead of starting over from the beginning (and potentially wasting a lot of time) it will use the cached layers and start again at the one that failed. If the failure happens in the middle of 12 items chained together, it will have to start over at the first item in that layer. If you successfully create an image and then make a change someplace in the middle, it will use the cached images prior to the change but then need to re-perform all the actions on subsequent layers as that change might affect them. As such, it’s better to put the items that will change the least first and then work your way up toward ones that might get versioned more frequently. If I revise this image I’ll probably put more thought into that and reorder some things.
Some of these commands aren’t easy to test, and escaping characters combined with environment variables and file paths with spaces in them gets pretty confusing. I ended up doing a lot of trial and error to get some of those combinations correct, which was frustrating, but it would have been even worse had the image creation needed to start from scratch each time.

Base image
I ended up using this image: microsoft/dotnet-framework:4.7.2-runtime-windowsservercore-ltsc2016

I had originally picked a later build of Windows Server Core but when I went to deploy to the Azure Container Registry (ACR) I found out that it doesn’t support that version of the image so I had to start all over with this image. ACR only supports Long-Term Servicing Channel (LTSC) versions of Windows. LTSC being versions of Windows that will be supported for 5 years and then have 5 additional years of extended support. It should also be mentioned that you don’t need to acquire the base OS for the container if you’re using something from a public repository. If you are using Docker to build the image, it will automatically download the image for you to use. This could be just an OS, or an OS with software already installed. In that case buyer beware I suppose. As far as licensing is concerned, Windows 10 allows you to run containers for development purposes. For production use, the Windows Server license of the host covers containers. I’m assuming then if you’re running the container in Azure, licensing is also covered.

NuGet
Downloads and installs NuGet.exe so we don’t have to use the Azure DevOps NuGet build task later.
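That step amounts to little more than a download into a folder that later gets added to the PATH; a minimal sketch:

New-Item -ItemType Directory -Path 'C:\Program Files\NuGet' -Force | Out-Null
Invoke-WebRequest 'https://dist.nuget.org/win-x86-commandline/latest/nuget.exe' -OutFile 'C:\Program Files\NuGet\nuget.exe'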

Visual Studio Build Tools & Web Targets
This downloads and installs the Visual Studio test agent, base build tools, and the specialized build tool packages for the different things related to Dynamics 365 CE we might want to build. It also removes the install files once complete. You’ll notice it’s installing the test agent but it doesn’t seem to be compatible with what is required for running tests from Azure DevOps. What it does do is ensure the agent reports to Azure DevOps that the test agent is installed, to satisfy build task demands. If the agent doesn’t show it meets the demands or requirements needed to run the build, it won’t even start. You could skip this and manually edit the list of demands for the agent in Azure DevOps, but the downside is that you’d need to do it for every hostname the container gets. When you stop and start a container it receives a new and different hostname which then shows up separately in the list of build agents. Not a big deal, but after a while you’ll end up with a long list of dead agents that will need to be manually deleted. There’s probably an API someplace for that, but the ideal solution would be to figure out a way to set or keep the same hostname each time. That might not be possible considering you can start any number of container instances and having the same name probably wouldn’t fly.
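For context, the Build Tools portion of that layer looks roughly like the sketch below. The bootstrapper URL is the public one for the VS 2017 Build Tools, but the exact workload IDs listed here are assumptions rather than a copy of my Dockerfile.

# Download the VS 2017 Build Tools bootstrapper, install a minimal set of workloads, then clean up
Invoke-WebRequest 'https://aka.ms/vs/15/release/vs_buildtools.exe' -OutFile 'C:\vs_buildtools.exe'
$installArgs = @('--quiet', '--wait', '--norestart', '--nocache',
    '--add', 'Microsoft.VisualStudio.Workload.MSBuildTools',
    '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools')
Start-Process 'C:\vs_buildtools.exe' -ArgumentList $installArgs -Wait
Remove-Item 'C:\vs_buildtools.exe'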

Path Variables
So things like NuGet can be executed from anyplace in the file system.

.NET Targeting Packs
For building against different versions of the .NET Framework.

Build Agent
This installs the build agent software so the container can communicate with Azure DevOps. The url comes from the Agent Pools page.

PowerShell Modules
This sets up NuGet.org as a package provider and installs the Azure and Dynamics 365 CE specific PowerShell modules we might need later. Having these pre-installed should prevent the need to import them each time in scripts requiring them.
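In script form that step probably isn’t far off from this; the module names are what I’d expect for Azure and D365 CE work, so treat them as assumptions:

# Register the NuGet package provider and trust the gallery so installs run non-interactively
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

# Azure and Dynamics 365 CE modules used by later pipeline scripts
Install-Module AzureRM -Force -AllowClobber
Install-Module Microsoft.Xrm.Data.PowerShell -Force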

Suppress Error Dialogs
Not sure this is 100% necessary but these registry entries should prevent Windows error dialogs which might hang something up.

Chocolatey Installs
Downloads and installs Chocolatey which then can be used to download other software packages. In this case Git, Node.js, and the Azure CLI. If you aren’t familiar with Chocolatey it’s package management for applications. Lots of commonly used software can be installed and updated from here via script.
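The bootstrap plus the package installs are only a few lines; a minimal sketch (package IDs assumed):

# Install Chocolatey, then the tools the agent needs
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install git nodejs azure-cli -y --no-progress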

NPM Installs
Grunt and gulp task runners.
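Likely nothing more than a couple of global installs along these lines (package names assumed):

npm install -g grunt-cli gulp-cli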

Google Chrome Install
I tried installing Chrome via Chocolatey but ended up having problems getting it to work. It’s hard to say if that version was the issue or if something else might have been wrong with the image during testing. Once installed it appears to be v71, which is the latest while I’m writing this. I’m not 100% sure what’s special about the version that comes from the url I’m using, but it seems to be used widely across the internet for silent installs. I should also point out that EasyRepro currently supports v2.38 of the Selenium Chrome driver. That version only lists official support for Chrome v65 – 67. So maybe it’s just luck that it works and this is an unsupported setup. I tried updating the Chrome driver to a version which supports the latest Chrome browser version and then had issues running the tests. Since that’s all kind of a mess I’d recommend just sticking to a known working combination like this.

Start.cmd

Just used to run start.ps1.

Start.ps1

This is used to configure and start the build agent when the container starts. I pulled this from the blog which is noted at the top. This needs your Azure DevOps instance name and a Personal Access Token (PAT) in order to connect to your agent pool. I’m sure there is probably a better way of handling those values other than hard-coding them into this file; I just haven’t had a chance to look for it yet.
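The heart of that script is registering the agent against your collection and then running it; a stripped-down sketch with placeholder values (not the actual file) looks roughly like this:

# Placeholders - swap in your organization URL, PAT, agent folder, and pool
$url  = 'https://dev.azure.com/YourOrg'
$pat  = 'YOUR_PERSONAL_ACCESS_TOKEN'
$pool = 'Default'

Set-Location C:\BuildAgent

# Register the agent with the pool, then start listening for jobs
.\config.cmd --unattended --url $url --auth PAT --token $pat --pool $pool --replace
.\run.cmd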

Conclusion

I’m glad I took the time to work through setting up a container because I learned a lot. I’m also glad I wrote these 2 blogs; when digging into the how’s and why’s I learned a lot more.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 3

Part 1 covered the reasoning for choosing a container and the performance gains that resulted in the build process.

Part 2 reviewed the components that were chosen to be installed to support the build process.

Setup

Maybe a little bit out of order but this is going to focus on getting up and running with containers on Windows. Again there’s almost certainly different ways but this is what I ended up doing.

Download & install Docker Desktop for Windows – you’ll need to create an account on Docker Hub before it will let you download. Select the option to use Windows containers during the install (if you miss it you can switch later). When that’s done, start it up and log in with your Docker ID. When you see it running in the system tray, right-click and see if any of the menu items say “Switch to Windows containers”. If so, select that. Either it doesn’t get set correctly during install or it’s doing something else that I’m unaware of.

If you want to create the image I was using, download the 3 files from Part 2  or get updated copies from the GitHub repo and put them in a folder. Edit start.ps1 with your Azure DevOps instance name and Personal Access Token.

Open a PowerShell prompt as Administrator and navigate to the new folder. 

Building the image

Here’s the command to start building the container image. The period at the end is part of the command. The “-t” flag allows you to assign a name of your choosing and a tag, like a version number, to the image. The “-m” flag here is capping memory use at 2GB. This configuration will use any cached layers in the process to save time.

docker build -t d365agent:v1.1 -m 2GB .

If you don’t want to use layers from the cache for whatever reason you can disable the cache by adding the no-cache option.

docker build --no-cache -t d365agent:v1.1 -m 2GB .

Prior to doing this, if you are on a laptop or have power saving options enabled, you might want to adjust things so that your machine doesn’t go to sleep while this is running. That seems to hang the process up and you’ll need to restart from the point at which that happened. If it does happen to stop, you can restart from where you left off by running the same command (with the cache enabled). If you make changes prior to restarting, the process will start at the lowest layer that was changed.

You should see it start downloading the different parts of the Windows Server image and then start running through the installation steps defined in the Dockerfile. This process is going to take a while the first time through since it’s going to download close to 6GB. Once it’s downloaded, the base OS image will be cached for future use. Keep in mind as well that all the items getting installed need to be downloaded, so you might as well grab a coffee and come back because it’s going to be running for a while.

When that finishes we now have an image that can be used in a container.

Creating and running a local container instance

You can create a new container instance from this image on your local machine using the following command. We’re telling it to automatically restart in case it gets shut down, specifying a name and hostname, and finally telling it which image to use.

docker run -it -d --restart always --name "d365agent" --hostname "d365agent" d365agent:v1.1

Now that the container instance is created you can use the start and stop commands to turn it on or off.

docker start d365agent

docker stop d365agent

If you want to access the filesystem of the running container use this command and you will be dropped into a PowerShell session in the container. This is good for verifying things got installed properly in case the logs during the build process don’t provide enough information.

docker exec -it d365agent powershell

Use the exit command to end the session. Be mindful of which filesystem you’re in, it’s easy to forget at times which you are connected to.

If you’ve provided the correct Azure DevOps information, you should be able to check the available agents and see your container. If you don’t specify an agent pool name in start.ps1, it will go into the default pool. Now when you run a build and specify this agent pool it will run on your local machine.


Creating an Azure Container Registry

Next you’ll want to create an Azure Container Registry, which will be private by default. There aren’t many options to pick from so I’m not going to cover it. If you want screenshots refer to the quickstart. The only special option I picked was to enable the Admin User. All that does is allow you to use the registry name as the username when publishing images. In reality you maybe wouldn’t want to do that for a production instance, seeing as if someone figured out your registry name they would also have the username, leaving only the password to figure out. Unlikely, but anything is possible. Otherwise it’s an individual login or service principal.

At this point you can go through the process of setting up GitHub integration and build your image in Azure but I’m just going to upload the one from my local machine that I’ve already created.

Again open a PowerShell prompt as Administrator and navigate to the folder with your Dockerfile.
First you’ll need to log into the Azure Container Registry with this command, replacing MyRegistry & AccessKey with the appropriate values from Settings -> Access Keys.

docker login MyRegistry.azurecr.io -u MyRegistry -p AccessKey

Next tag your local image for the Azure Container Registry with this command. It’s pointing at the local image we created earlier and it’s going to be placing it into a repository named “d365/d365agent”. The first “d365” acts as a namespace to categorize different images.

docker tag d365agent:v1.1 MyRegistry.azurecr.io/d365/d365agent:v1.1

Then use this command to upload the image.

docker push MyRegistry.azurecr.io/d365/d365agent:v1.1

It will need to upload all the image layers so again there will be some waiting involved.

Creating an Azure Container Instance

The only thing left to do is create an Azure Container Instance from the image. If you navigate to Repositories you will see “d365/d365agent”. If you drill in further you’ll see the initial version of the container image. From there you’ll have a Run Instance option. Once again the steps are pretty simple but if you want the visuals check out the quickstart.

Make sure you choose Windows as the OS type since it’s not the default. Use at least 2 GB of memory and I’d try for 2 processor cores. You might be able to get away with 1 if you aren’t using EasyRepro. When I tried to run those tests with just 1 core it consistently failed. Take note of which Resource Group you use as you’ll need to go there to manage the container instance once it’s created.

Once that starts you can sit back once again and wait while the container is provisioned.

Once that’s finished you should see something like this in Azure DevOps.

If you aren’t actively doing anything with the container instance I’d shut it down to save money.

Get Latest Solution Patch Name With PowerShell


Someone might find this useful if trying to move solutions around using Azure DevOps & PowerShell. I wanted to automate the export of the latest solution patch using the Microsoft.Xrm.Data.Powershell library as part of a build pipeline. It’s pretty straightforward, but it requires the solution name; that’s simple for the base solution, but when dealing with a patch it’s not possible to predict what the name is going to be in order to build it into your script.

I came up with this. Given a solution uniquename, it sets build variables for the uniquename & version of the latest patch of a solution, or the base solution's uniquename & version if no patches exist.
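As a rough sketch of that idea (connection details and the solution uniquename below are placeholders, and treat this as an illustrative reconstruction rather than the exact script):

# Placeholder connection and solution name
$conn = Connect-CrmOnline -ServerUrl 'https://yourorg.crm.dynamics.com' -Credential (Get-Credential)
$baseUniqueName = 'MySolution'

# Grab the base solution so patches can be matched on parentsolutionid
$base = (Get-CrmRecords -conn $conn -EntityLogicalName solution `
    -FilterAttribute uniquename -FilterOperator eq -FilterValue $baseUniqueName `
    -Fields solutionid, uniquename, version).CrmRecords[0]

# Patches point back at the base solution through parentsolutionid
$fetch = @"
<fetch>
  <entity name='solution'>
    <attribute name='uniquename' />
    <attribute name='version' />
    <filter>
      <condition attribute='parentsolutionid' operator='eq' value='$($base.solutionid)' />
    </filter>
  </entity>
</fetch>
"@
$patches = @((Get-CrmRecordsByFetch -conn $conn -Fetch $fetch).CrmRecords)

# Newest patch wins; fall back to the base solution if there aren't any patches
$latest = if ($patches.Count -gt 0) {
    $patches | Sort-Object { [Version]$_.version } -Descending | Select-Object -First 1
} else { $base }

# Expose the values to later steps in the pipeline
Write-Host "##vso[task.setvariable variable=SolutionUniqueName]$($latest.uniquename)"
Write-Host "##vso[task.setvariable variable=SolutionVersion]$($latest.version)"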
