
How to get Client ID and Client Secret Key in CRM 2016?


I have created an online Dynamics 365 30-day trial.

I want a Client ID and Client Secret to call the Web API using OAuth 2.0. How do I get them?


Using Chocolatey to Distribute Developer Tools - Part 1

First off, what is Chocolatey? Chocolatey is a package manager for applications - quite literally NuGet for Windows, as the package formats are the same. There are one or more repositories (again, it's just NuGet) that keep track of all the packages available, and you connect to one using a client to download and install an application. It's similar to NuGet in the sense that the community can contribute packages, but there appear to be fewer contributions from "official" sources like Microsoft, Google, Apple, etc. However, that doesn't mean you aren't able to download and install software from these companies through Chocolatey. Applications like Visual Studio, Chrome, and iTunes have all been downloaded, most likely from a publicly available link, and converted into a package by someone in the community. I should call out right away that this isn't meant to circumvent licensing or pirate anything. The original applications aren't being modified in any way; some scripting is just added around them to automate installation.

Since you're basically relying on "some guy in his basement" to create these packages, there is still some risk that you could download something harmful. Granted, the packages are virus scanned and community moderated, but things can still slip through - no different than downloading something off NuGet, GitHub, or even the XrmToolBox for that matter. This just happens to be one of many approaches you could take if you are in a position to assert more control over the things your users / developers are downloading and installing. Making commonly used software easily available for people to install themselves lessens the need for them to search around the internet and potentially download something bad. Not to mention, self-service software installation can take some of the burden off the support team and free them up to do other things.

What goes into a package?


At the core there is an XML file containing the metadata about the package: things like title, version, description, licensing info, dependencies, etc. can all be found here. This is the source of the information everyone sees in the download feeds. Instead of DLLs like a typical NuGet package, you'll usually have an executable of some sort, or the files needed to run the application if an actual installation isn't required. Wrapped around this is some PowerShell code which takes the downloaded content and performs the tasks required to get the application to a usable state. Ideally there should be no interaction from the user, as one of the big selling points of Chocolatey is that system administrators can use it to silently install and manage software across a large number of machines. Each package is versioned by the repository so you can always go back to a previous version if need be and, just like NuGet, packages are immutable. Once it's published that's it - no changing it without increasing the version number. That can be painful if you're developing packages, but for consumers it prevents the old bait and switch where something that once worked perfectly fine is quietly swapped for something bad. Chocolatey will also keep track of what is installed on your machine and make updates available as they are released. Additional PowerShell can be put in place to run during the uninstall process, so package creators can add any code required to assist with cleanup on top of the application's normal removal process.

But I’m a Dynamics person and not a system administrator


True, this is maybe geared a little more toward organizations than individuals, but that doesn't mean it couldn't be put to personal use. For developers, just think about the next time you need to rebuild your primary development machine. Hunting down all the installers and clicking through everything is about a day-long process, at least it is when I need to do it.

This is my example use case – creating a way to more easily install the Dynamics 365 CE SDK tools (Plug-in Registration Tool, Package Deployer, etc.). Back when there was a single download for the SDK it was easy because all the tools were right there. I can understand why managing that was probably difficult and why going to an online-only SDK was the way to go, but it also introduced some new challenges in making these tools available. The new process for downloading them from NuGet using PowerShell is documented, but I'll still say it would have been easier to just point people to the manual download link, have them rename .nupkg to .zip, and be done with it. Imagine this situation: you're dealing with someone who isn't really a developer but has just enough knowledge of the platform to know how plug-ins work, and you're trying to describe to them over the phone how to edit a plug-in step. You'll end up asking them if they have the Plug-in Registration Tool installed and of course they'll say "no", so then you start telling them they need to download it from NuGet and they'll usually respond with something to the effect of, "WTH is nugget?". So then you just ask for remote access and do it yourself.

Getting started


Install Chocolatey
https://chocolatey.org/install
Now you’re probably thinking that this is all command line stuff that you don’t want to memorize just to make it “easier” to install the latest version of an application. Not to worry, there is a UI that can be installed to make things a little more friendly.

Install Chocolatey UI
https://chocolatey.org/packages/ChocolateyGUI
At this stage you should have Chocolatey up and running and can download packages the community has already made available.
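For reference, the install boils down to a couple of commands run from an elevated PowerShell prompt (this is the widely published bootstrap from the links above at the time of writing; check those pages for the current version):

# Install Chocolatey itself
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Then use it to install the GUI client
choco install chocolateygui -y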

Creating and hosting your own packages


You could go through the process of creating a package and then uploading it to the main Chocolatey site, but obviously if you're creating something solely for personal or private use, public hosting isn't an option. The other thing I'd throw out there is that it might be a violation of some terms of service or other legalese somewhere that forbids redistributing copyrighted material in this way, so if you don't want "the lawyers" giving you a call you'd better look for an alternative.

Remember way back in the first paragraph where I mentioned that Chocolatey and NuGet are basically the same thing? That means there are a number of ways to create your own private NuGet server. The software itself is open source, so if you want to stand up your own copy and manage the hosting, that is an option. There are also some paid services / products which may work for you: ProGet offers a commercial product that you run on your own server (which is a pretty good deal if you've got a lot of users and need AD integration), and MyGet is a cloud-based offering with paid and free plans.

Instead of those I'm going to focus on using Azure DevOps to host packages & feeds and to automate the package maintenance process. Azure DevOps is free for up to 5 users or included with certain MSDN subscriptions. Package management (Azure Artifacts) is an add-on which is free for the same number of users and then charges per user, per month after that, which last time I checked starts to get pricey when you've got a lot of developers.

In part 2 I'll go into the project & package set up and the code used to keep the packages up to date. As I mentioned earlier, there will be examples showing how to convert the SDK Tools into packages as well as how you can replicate packages from other feeds (like Chocolatey.org) for use in your own curated feed. Part 3 will cover how to create Artifacts in Azure DevOps and how you can use a Build Pipeline to watch for new versions and automatically update the packages. That last part is probably the most important. Creating an installable package only to use it once is a waste of time. The real benefits come from installing or updating multiple times on multiple machines.

Using Chocolatey to Distribute Developer Tools - Part 2

This is going to focus on the Chocolatey package setup and the code used to build updated packages.

Project structure


A single git repository which contains folders for each package / application / tool. 
Each package folder contains the following:
  • .nuspec file which provides the metadata about the package
  • PowerShell file containing the scripts that will go into the Azure DevOps Build Tasks to:
    1. Build the updated package
    2. Push to the NuGet feed
    3. Commit the updated files back to source control
  • Tools folder containing 2 PowerShell files
    1. chocolateyInstall.ps1 which handles the installation once it’s on the target machine
    2. chocolateyUninstall.ps1 which handles the uninstallation from the target machine

.nuspec file


You can choose to change the metadata to whatever you'd like, but I'm going to use what the original package contains. Later, in the code used to update the packages, you'll see I'm just reading from the downloaded content and updating the matching elements.
You'll want to change the package <id> to something unique, at the very least a variant of the original. Assuming you're running the package update process in the same place as your feed (like Azure DevOps), you might run into an issue where it uses your private feed first to retrieve the package as opposed to the public source you actually intended. I'm guessing that's by design, but the issue I ran into popped up in the script comparing the version in the .nuspec in source control against the package that is publicly available. Since we aren't specifying a version (because we won't really know what it will be once this process is automated), it pulls back the first copy it finds based on the registered sources. When the comparison happens it sees the same version and aborts because it doesn't detect that an update is needed.

If you’re going to have the process use the public package for metadata you can get away with filling in any dummy data and after the first pass it will get updated to the real values. If you want to look at what’s there ahead of time you could use this simple PowerShell to retrieve the package metadata.


Be careful of the version numbers as once you’ve pushed to the feed you cannot delete a package, you can only un-list it or publish a new version with a higher version number. If you use your own version numbering scheme it’s probably not as big a deal but more than likely you’ll want to use the same version number as the public package.

Full .nuspec documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec

Package building scripts



Part 1 – building the package


This is used in a PowerShell build task and starts off with variables for the public package we’re using as a base and the corresponding file/folder locations in the project.

Next is the version comparison between the latest version of the public package and what is currently in the .nuspec file. The Plug-in Registration Tool is on NuGet, so we use NuGet.org as the source. If you're using a Chocolatey package as the base then you'll need to use Chocolatey.org as the source. On Azure DevOps, NuGet is already registered as a source but Chocolatey is not, in which case you'll need to use Register-PackageSource and target Chocolatey.org (example). Once the comparison is made, if the version hasn't increased then the process stops. Before doing so a Pipeline Variable is set so we can use it to prevent any later build tasks from running. The other thing of note is the <files> section, which determines what should be included when the package command is run. In this case we want just the tools folder and the .zip file containing the actual content, and we'll be ignoring the readme file and this PowerShell script file.
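The comparison itself only takes a few lines. A simplified sketch (the package id, .nuspec path, and variable names are placeholders; ContinueUpdate matches the Pipeline Variable checked by the later build tasks):

# Compare the latest public version against the version recorded in the .nuspec
$packageId  = 'Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool'
$nuspecPath = "$(Build.SourcesDirectory)\PluginRegistrationTool\PluginRegistrationTool.nuspec"

$publicVersion  = (Find-Package -Name $packageId -Source 'https://www.nuget.org/api/v2' -ProviderName NuGet).Version
$currentVersion = ([xml](Get-Content $nuspecPath)).package.metadata.version

if ([Version]$publicVersion -le [Version]$currentVersion) {
    # Nothing new - set a Pipeline Variable so the remaining tasks can be skipped
    Write-Host "##vso[task.setvariable variable=ContinueUpdate]false"
    return
}
Write-Host "##vso[task.setvariable variable=ContinueUpdate]true"
Write-Host "##vso[task.setvariable variable=PackageVersion]$publicVersion"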

Once it’s been determined an update is needed, the public package is downloaded. As part of the process I’m giving it common name that corresponds to a value in the install file. This is solely to have one less thing to change when reusing this code.
From the package used in the version comparison I’m updating the .nuspec in my package. This isn’t a necessity, just more for information purposes. I’m also setting a Pipeline Variable here with the new version number so I can use it in a later build task. 

At this point the only thing left is to run the Chocolatey command to create the package from the contents of the current directory based on what is defined in the .nuspec file. Chocolatey is installed by default on the Azure DevOps hosted build agents, but if you're running this from anywhere else you'll need to run the PowerShell command to install it first.
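That last step is just the pack command run from the package folder, with an install check in front of it for agents that don't already have Chocolatey:

# Install Chocolatey if it isn't already on the agent, then build the package
if (-not (Get-Command choco -ErrorAction SilentlyContinue)) {
    Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
}
choco pack   # uses the .nuspec in the current directory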

Part 2 – publishing the package


This is used with a NuGet build task to upload the completed package to the feed which is going to host it. Specific to Azure DevOps I’m using a Personal Access Token that has read/write access to Packages to authenticate since the feed isn’t public. Also note that I used the NuGet v2 endpoint as opposed to the newer v3 endpoint. The format looks like this:

https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

Here’s where you’ll run into a 409 error uploading the package if you try to upload a package with a version number that already exists. I’ll cover more about the feed set up in part 3 of this blog.

Part 3 – Committing the changes


This is used in a separate PowerShell build task run after publishing. Since I've made modifications to the .nuspec file, they'll need to be committed back to source control so the next time this runs the package process won't run unnecessarily after the version comparison. In the commit message I'm using the Pipeline Variable I set earlier with the new version number we're updating to. In the Azure DevOps build you also need to enable the option Allow scripts to access the OAuth token so that we can pass the System.AccessToken variable in the request header to authenticate.
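A sketch of what that commit script can look like; the identity, file path, and branch are placeholders, and PackageVersion is the variable name assumed in the earlier sketch:

# Commit the updated .nuspec back to the repo using the build's OAuth token
git config user.email "build@dev.azure.com"
git config user.name "Build Agent"

git add PluginRegistrationTool\PluginRegistrationTool.nuspec
git commit -m "Updated package to $(PackageVersion)"

git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin HEAD:master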

Installing & uninstalling


In this case these scripts are very simple. After a user chooses to install a package it's downloaded to the local machine, at which point chocolateyInstall.ps1 kicks off. The SDK tools aren't installable applications, so "installation" is just a matter of copying the files somewhere. I chose the user's Application Data folder and am creating a folder structure that will support installing tools from multiple sources. The Chocolatey unzip command will handle extraction and the creation of any folders in the destination path that may be required. This is just unzipping my renamed package file downloaded from the original source. To complete things I'm using the Chocolatey create shortcut command to make a shortcut on the user's desktop to the folder containing the executables.
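A trimmed-down sketch of that install script, using the two Chocolatey helpers mentioned; the folder and file names are placeholders:

# chocolateyInstall.ps1 - copy the tools somewhere useful and drop a desktop shortcut
$toolsDir   = Split-Path -Parent $MyInvocation.MyCommand.Definition
$zipFile    = Join-Path $toolsDir 'PluginRegistrationTool.zip'            # renamed package content
$installDir = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'   # destination under Application Data

# Chocolatey's unzip helper extracts the archive and creates any missing folders
Get-ChocolateyUnzip -FileFullPath $zipFile -Destination $installDir -PackageName $env:ChocolateyPackageName

# Shortcut on the desktop pointing at the folder containing the executables
$desktop = [Environment]::GetFolderPath('Desktop')
Install-ChocolateyShortcut -ShortcutFilePath (Join-Path $desktop 'Plugin Registration Tool.lnk') -TargetPath $installDir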

When things get installed, Chocolatey is keeping the original package around so it knows what is on the machine to determine if updates are available, do reinstalls, and to access the uninstall script. The location will be similar to: C:\ProgramData\chocolatey\lib\YourPackage

As you might have guessed, chocolateyUninstall.ps1 runs when the user chooses to uninstall the package. Since it's not an installed application I'm just deleting the things I created during the install and removing the package from the lib folder. There are a number of different Chocolatey commands you could use to uninstall, but at the time I worked on this I couldn't get the one I believe I was supposed to use to work, so I went down this path. For the full list of Chocolatey commands, check out their documentation.
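The uninstall script is just the reverse; a sketch assuming the same paths as the install example above:

# chocolateyUninstall.ps1 - remove everything the install script created
$installDir = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'
$shortcut   = Join-Path ([Environment]::GetFolderPath('Desktop')) 'Plugin Registration Tool.lnk'

Remove-Item $installDir -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item $shortcut -Force -ErrorAction SilentlyContinue

# Clean up the cached package folder Chocolatey keeps under its lib directory
Remove-Item (Join-Path $env:ChocolateyInstall "lib\$env:ChocolateyPackageName") -Recurse -Force -ErrorAction SilentlyContinue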

That covers the package creation process. Part 3 will look at setting up Azure Artifacts & a Build Pipeline to keep things continuously updated, as well as how to use the new feed in the Chocolatey clients. In case you missed it, part 1 gave an overview of Chocolatey and why you might want to use it in the Dynamics 365 space.

I’ve got everything up on GitHub so you can refer to that incase I don’t end up explaining something clearly enough

https://github.com/jlattimer/D365Chocolatey

Visual Studio 2017 - Dynamics 365 SDK can't connect to server


1. I downloaded the Developer extensions and the developer toolkit. (vsix for dynamics)

2. Updated the manifest with version 15 instead of 14.

3. Got both installations completed 

4. Instead of mapping the installed folders, I mapped the path to the most recent version of the DLLs.

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/download-tools-nuget

And now I can't make the connection. When I try to start a new project I get the window that asks if I want to reconnect to CRM; it shows my server, organization, and user and asks if I want to use this connection. I click yes and then I get this error:

Using Chocolatey to Distribute Developer Tools - Part 3

This is going to focus on setting up Azure DevOps to host the NuGet package feed and keeping the packages up to date. In case you missed the other posts in this series, part 1 provided an overview of Chocolatey and why you might want to use it as a D365 developer and part 2 focused on the package content and the scripts used to automate creation.

Feed setup


In your chosen Azure DevOps project head to Artifacts. If you don't see that as an option, make sure under Project Settings that Artifacts are turned on. From there it's just a matter of thinking up a name for the new feed. Once created, use the Connect to Feed option to retrieve the URL. We need this for the NuGet build task to push the package once it's created, as well as for connecting from the client. The v3 endpoint URL will be displayed here, but I'm using the v2 endpoint because it was the only way I could get it to work. It's straightforward to convert from the v3 to the v2 URL.

v3: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v3/index.json

v2: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

If you haven’t created a Personal Access Token (PAT) that has read/write access to Packages yet, do so now.

Pipeline setup


I created a different build for each package in the project.
The source will be the git repository hosting the project. This way the first thing that happens is the project code being downloaded to the build agent. We'll need those files to build the package around whatever we download, as well as to check the version we last built against the current public version.


Step 1: PowerShell Script: Build package


You can inline the code from #1 in a PowerShell build task. If it’s determined a new package is available, the script will download the latest version and build the Chocolatey package from it.

Step 2: NuGet: Push to feed


Originally I wanted to make everything a single PowerShell script and just use that, but I wasn't able to get the authentication working using the ApiKey parameter along with a Personal Access Token. Luckily, using the existing NuGet build task worked fine. Use the command from #2 and select custom as the type. Additionally, we don't want this step to run if a new package wasn't built, so to prevent that expand Control Options on the task and then Custom Conditions. Then use this snippet to check the Pipeline Variable created in step #1 and determine whether the task should run:

eq(variables['ContinueUpdate'], 'true')


Step 3: PowerShell Script: Commit Updates


Inline the code from #3 in another PowerShell task to make sure your updated .nuspec file gets back into source control so it can be used the next time the process runs. You'll also want to make sure that your .gitignore file excludes .zip and .nupkg files so we don't accidentally store redundant copies of those. Don't forget to add the same Control Option change as in step 2.

Step 4: Build Triggers


The last thing is to set up a schedule so it keeps itself updated; this can be done from the Triggers tab inside the build. I'm using the free hosted agent and it doesn't allow parallel builds, so I staggered the start times on the day of the week I have this running. You could very well run them all at the same time and they'd just queue up and run one at a time anyway.

Done!

Chocolatey GUI


Here’s what you end up with after all this work. Click install and you’ll have the latest version of the tool installed in a few seconds. Click uninstall and with any luck (if you cleaned up after yourself properly) everything will be gone.





To connect to Azure DevOps go to Settings and then Sources.
  • Id: Display name for the feed
  • Source: The v2 Azure DevOps feed url
  • Username: Anything
  • Password: The Personal Access Token created earlier (or a different one with Read access to Packages). Using the normal username / password combination wasn’t working here.

A few other notes


Download counts, package size, and the package image don't get displayed. That seems to be an issue between Chocolatey GUI and Azure DevOps; setting up a feed from a different private NuGet source didn't have this problem.

Download all the code here: https://github.com/jlattimer/D365Chocolatey

Connect Dynamics CRM to Asp.Net Core 2.0


Can anyone refer me to documentation that shows how to connect an Asp.Net Core App to CRM online?

Things I found based on my research:

  1. The Web API seems to be a logical way to get/query data from CRM.
  2. I haven't found a detailed document on how to connect an ASP.NET Core application to CRM.

Thank you in advance.

CRM Numeric Workflow Utilities v2.3.0.0

Call an external Web Service from Plugin - Dynamics crm ONLINE


Hi all.

I'm facing a permission error when I try to call an external web service from my plugin.

Notes:
1. The web service was developed by another company and they said they could not put a friendly name (DNS) on this URL.
2. The web service is addressed by IP (http://200.279.18......).

So, as a temporary solution, I made a console app scheduled with Windows Task Scheduler that runs every minute, calls the web service, and updates the Quote entity. But I need this web service executed in real time.

Any help?


Dynamics CRM 2016 (Online) WebAPI Query

I want to sort a string field in descending order as if it were an integer (numeric descending) in a Web API query.

ex) string desc -> 2, 10, 1
int desc -> 10, 2, 1



Query : new_projects?$select=new_dt_start,new_txt_project_no&$orderby=new_txt_project_no desc

Is there any way?

Run Automated Tests When Your D365 CE Org Gets Updated

With online D365 CE instances getting updates pushed regularly, it's almost impossible to ensure that you've tested against the latest bits. That's assuming you even know about each update, since the majority go in silently in the background. Realizing that organizations need to test before updates roll out to production, or at least be aware of issues caused by updates before users start reporting them, Microsoft has taken the first steps to allow you to get ahead of potential issues.
The Microsoft.Xrm.Tooling.Testing PowerShell library (in preview at the time of writing) has been released, providing hooks into a few key backend operations that you didn't have access to previously. When one of these events occurs you have the opportunity to run automated tests.

Current backend triggers:
  • OrganizationDatabaseUpdate
  • OrganizationDatabaseUpgrade
  • OrganizationBitsUpdate
  • OrganizationPackageInstall
  • OrganizationPackageUninstall

How does it work?

To start, a developer will need to create a .NET unit test project that will contain any tests that need to be run. The tests really can't be 'unit tests', because by definition unit tests are meant to be totally decoupled from everything, and running tests that don't act on a live environment isn't going to tell you anything you didn't already know. What you are really after would be better described as 'integration tests': tests that connect to the live environment that just registered the change and uncover things that might not be working. Depending on the complexity of your business and customizations it might be a little tricky to create meaningful tests. Once the tests are written you can use the provided PowerShell commands to upload them to the Microsoft environment and register when they should be run. Then, when a trigger you've hooked into is tripped, your tests run.

Test suite setup

I’ll point out right now that the documentation doesn’t appear to be complete yet so there’s probably a better way to do some of these things but without knowing I just improvised to get things working.

For test purposes I created a unit test project that is only going to perform one operation which is to create a contact record. My ‘test’ consists of a CrmServiceClient connection that creates the record and a basic assert statement that passes the test if there aren’t any exceptions. I’ve also got an app.config file that contains the connection string populated with valid credentials. Also note that a better solution would be to use Azure Key Vault and have it read any credentials from there. It’s a bit more secure and allows for updating the values without having to re-deploy things.

Once you’ve got the code working, zip up the contents of the debug/release folder. I mentioned before that the tests get uploaded to Microsoft’s environment. Best I can tell is that the zip file containing the tests needs to be available via a URL on the public internet as opposed to uploading a local file (again maybe I’m doing it wrong). This appears to only be required initially as the documentation states, “Testing infrastructure will download and store the test suite ZIP file for future runs. Changes to this URL will not affect the test run. If you need to update the test suite, please run this cmdlet again pointing to the new test suite's URL”. I dropped my zip file on a blob store and generated a short-lived SAS token I could add on to the URL.

If you haven’t already done so, install Microsoft.Xrm.Tooling.Testing in PowerShell

install-module Microsoft.Xrm.Tooling.Testing

Then register a test suite to contain the test triggers. Based on how things get configured, a test suite will center around the tests being executed. If you want different sets of tests to run for different triggers, it looks like you'll need separate suites.
  • $apiUrl = Online Management API Service URL
  • $orgId = D365 CE organization ID found on the Developer Resources page
  • $orgUniqueName = D365 CE organization Unique Name found on the Developer Resources page
  • $testSuiteUrl = URL of the unit test project zip file
  • $completionCallbackUrl = URL that will have basic test data sent to it when complete via webhook

$user = "admin@test.onmicrosoft.com"
$password = "password"
$cred = New-Object System.Management.Automation.PSCredential `
     ($user, (ConvertTo-SecureString $password -AsPlainText -Force))
$apiUrl = "https://admin.services.crm.dynamics.com"
$orgId = '38600bb5-ed48-47e4-8844-e05c5516b446'
$orgUniqueName = 'org1a1aa611'
# Test Suite
$testSuiteName = 'IntegrationTests'
$testSuiteUrl = 'https://mystorage.blob.core.windows.net/test/AutoTest1.zip?sv=SASKEY'
$testUserCredential = $cred
$completionCallbackUrl = 'https://prod-03.westus.logic.azure.com:443/workflows/...'
# Test trigger
$testTriggerName = 'IntegrationTests-OrganizationPackageInstall'
$testTriggerType = 'OrganizationPackageInstall'

Register-TestSuite -ApiUrl $apiUrl -OrganizationId $orgId -OrganizationUniqueName $orgUniqueName `
     -TestSuiteName $testSuiteName -TestSuiteUrl $testSuiteUrl -TestUserCredential $testUserCredential `
     -ApiCredential $cred -CompletionCallbackUrl $completionCallbackUrl


You should see a result of true returned if successful.

Test trigger setup

Set up one or more of the backend triggers in the test suite that will execute your tests. Here the trigger is going to be 'OrganizationPackageInstall', which I've assumed to mean when one of the system-level solutions or a solution found in the D365 CE Administration Center is installed (and hopefully updated as well).

Register-TestTrigger -ApiUrl $apiUrl -OrganizationId $orgId -TestSuiteName $testSuiteName `
     -TestTriggerName $testTriggerName -TestTriggerType $testTriggerType -ApiCredential $cred


Again you should see a result of true returned if successful.

To confirm everything is set up correctly, you can run this command to see what is registered for a specific organization.

Get-TestSuites -ApiUrl $apiUrl -OrganizationId $orgId -ApiCredential $cred

It will show the suites and triggers that are currently configured.

Testing this out

You can manually trigger all the tests in a suite to ensure that everything is working.

Start-TestRun -ApiUrl $apiUrl -OrganizationId $orgId -TestSuiteName $testSuiteName `
     -ApiCredential $cred


This will return the ID of the test run.

You can check the status of a run at any point.

$testRunId = '2c5d1e62-ff35-43c3-9681-83de5fbb2235'
Get-TestRun -ApiUrl $apiUrl -OrganizationId $orgId -TestRunId $testRunId `
     -TestSuiteName $testSuiteName -ApiCredential $cred


This will output information about the run including status (running, succeeded, etc.). If complete it returns an event log but its purpose must be something other than reporting on the actual test results as it contains no references to what was executed.

Installing a solution from the Administration Center will trigger this particular event as well.

More about test results

From what I can tell so far, there isn’t a way to see if individual tests have passed or failed. That said, it might be best to add your own way of logging the test results so you can see exactly what passed and what didn’t. When the run completes it triggers a webhook with some basic information about the run. Lots of options open up to get notified so you can go and look at the results. I used a Flow which runs based on an HTTP request being received and I used the URL it generated as the CompletionCallbackUrl when registering the suite. Then it just sends an email notification with all the data.

EasyRepro tests

At the moment it doesn’t appear that whatever is running the tests is set up to handle EasyRepro / UI tests. All my attempts failed which again might mean I’m just doing it wrong. For v1 of this solution I can see where Microsoft wouldn’t have yet built out or allowed for the various supported browsers being installed on the infrastructure to allow for this sort of testing.

If you’re interested in running these types of tests you’re going to have to rely on your own test platform for the execution. In my case I’ve got EasyRepro tests set up to run in Azure DevOps (ADO). In order to get them to run based on one of these provided triggers you can either create a dummy unit test which uses the ADO APIs to execute a build or release directly or just pass the dummy test and have the webhook call a Flow where you use the built in ADO connector to kick something off.

Should the built-in ADO connector not work for you, an HTTP action can also be used. Here's an example of queuing a build using the REST API.

POST  https://{your instance}.visualstudio.com/{project name}/_apis/build/builds?api-version=5.0

Headers
Content-Type: application/json

Authorization: Basic
Username: {anything}
Password: {Azure DevOps Personal Access Token}

Body
{
        "definition": {
            "id": {build definition id}
        }
}


Once that kicks off and runs, it's up to you to handle any further notifications and reporting on test results.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 1


What was I hoping to accomplish by doing this?


Working with Azure DevOps build and release pipelines can be a slow and tedious process, even more so using the hosted agents that are provided. I find it hard to keep track of filenames and the folders they end up in when I've got to keep it all in my head because it only exists for a short time and then is gone. It's further complicated when needing to dynamically create any of these values. It feels a lot like trying to debug a plug-in using only the trace logs. Change code, deploy, execute, wait, review the log, and repeat. The difference is the waiting part is measured in minutes rather than seconds.

The goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run EasyRepro / Selenium UI tests, and be able to deploy as needed. All that, plus be faster because I’m impatient.

Containers at a high level


Prior to a few weeks ago I hadn’t put much thought into the concept of containers since the majority of Dynamics 365 CE development is done right inside the application. When scenarios came up where a plug-in wouldn’t work that functionality ended up on an existing VM that ran integration jobs or turned into an Azure Function. Functions and VMs really are very different from one another if you think about it. An Azure Function is cheap, easy to set up, and can scale but lacks flexibility when it comes to getting at lower level functionality that isn’t exposed or being able to install additional components. A virtual machine is usually expensive to run, requires constant maintenance, and is slow to start up but provides the ability to use a wider and more complex array of software.

In my eyes a container falls nicely in between. Using Azure to run the container you'll end up paying for storage space for the images, which will certainly be more than a Function but probably not more than a VM. A Function and a VM both bill based on compute time. The big difference is that when a Function isn't actually processing something it's shut off and not adding to the bill. A VM on the other hand is accruing compute time as long as it's turned on, whether it's doing work or not. The pricing model for a container is closer to that of a VM, but the rates appear to be cheaper and costs are calculated per second as opposed to per hour. Turning things on and off to reduce costs is more suited to containers as they can often be up and running in a few seconds, while a VM could easily take a minute or more to fully start up and get itself into a state where applications can run.
To get an idea of the costs, here's what this is costing to run: roughly $114 / month if you left it running 24/7. If you turn it off when not in use then you'll see the Container Instance costs drop.

Management is easier using a container versus a VM. On the VM there is the worry about patching and all the possible ways someone could hack in because of the various services running, open ports, etc. Windows-based containers don't run a full-blown copy of the OS but rather a scaled-down version (Nano Server or Windows Server Core) based on a specific build of the full OS. Fewer features, less chance for someone to exploit something. The other point is that these operating systems aren't meant to be patched in the traditional sense of running Windows Update. When it's time to update you're basically installing whatever components again from scratch on top of a new version of the OS image. Sounds painful, but it's really not once you've got the scripting in place (up until that point, though, it is very painful).

For more on containers: https://www.docker.com/resources/what-container


Plug-in compile & unit test build time comparison


I’m going to skip over the container build for the moment (covered in part 2) and go right to the end to show what kind of difference using a container made. The test case I used was compiling a bare bones plug-in and running a single unit test. As you can see from the time breakdown I think I managed to achieve what I was looking for.

Hosted VS2017 Agent
  • Queue time: 1s
  • Prepare job: <1s
  • Initialize agent: <1s
  • Initialize job: 7s
  • Checkout: 14s
  • NuGet Restore: 1m 7s
  • MSBuild - Build Solution: 54s
  • Visual Studio Test Platform Installer: 8s
  • VsTest – Execute Unit Tests: 35s
  • Publish – Test Results: 5s
  • Post-job Checkout: <1s
  • Report build status: <1s
  • Total: 3m 14s

Private Agent Azure Container Service
  • Queue time: 1s
  • Prepare job: <1s
  • Initialize agent: N/A
  • Initialize job: <1s
  • Checkout: 3s
  • Command Line Script - NuGet Restore: 4s
  • MSBuild - Build Solution: 8s
  • Visual Studio Test Platform Installer: 2s
  • VsTest – Execute Unit Tests: 14s
  • Publish – Test Results: 4s
  • Post-job Checkout: <1s
  • Report build status: <1s
  • Total: 38s

So what are the differences?


Queue Time
Both were 1 second when only running 1 build at a time. Each agent can only run 1 job at a time by default without getting into parallel builds, multiple agents, etc. When you start lining up multiple builds back-to-back the queue times on the hosted agent are going to be considerably longer.

Initialize Agent
Not applicable for privately hosted agents.

NuGet Restore
These packages needed to be restored for the test I ran:
  • FakeItEasy
  • FakeXrmEasy.9 (@jordimontana)
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.Deployment
  • Microsoft.CrmSdk.Workflow
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly
  • Microsoft.IdentityModel.Clients.ActiveDirectory
  • MSTest.TestAdapter
  • MSTest.TestFramework
On the Microsoft hosted agent, NuGet.exe is already installed. Using the NuGet build task, before it attempts to download any packages it first needs to spend a couple of seconds registering NuGet.org as a package provider. Then it downloads and installs all the packages because nothing is cached. This was particularly long on the build I'm using for comparison at 1 minute 7 seconds, but even the faster runs were taking in the neighborhood of 40 seconds.

On the container I pre-installed NuGet.exe so instead of using the NuGet build task I used a Command Line Script task and executed something like:

"C:\Program Files\NuGet\nuget.exe" restore $(Build.SourcesDirectory)\TestPlugins.sln -Verbosity Detailed –Noninteractive

After the first run of this build, all those packages were cached locally and available so it took only 4 seconds.

MSBuild - Build Solution
I couldn’t find anything referencing the specifications for the hosted servers. The Azure Container Instance had 2 vCPUs and 2 GB of memory. I suspect that’s more than gets assigned to the hosted agents and as a result the build time is considerably faster.

Visual Studio Test Platform Installer
This is an out of the box build task which installs VSTest.Console.exe needed to run .NET unit tests. In hindsight this step probably wasn’t needed on the hosted agent since it’s already installed by default.

I spent a fair amount of time trying to get this installed on the container image without success. Again in hindsight it would have been easier to install a full copy of Visual Studio 2017 (which would have included this) instead of trying to install the bare minimum number of components I thought I'd need for a capable D365 build & test server. The flip side though is the container image becomes larger, more costly, and more cumbersome to deal with. The bright side is that once it's installed it's available for future use without re-downloading and re-installing. The build task is smart like that and first checks if it's there before blindly installing. That 2 seconds was just to check if it was installed. The bigger reason I wanted to get it installed was to simplify and reduce the number of steps a person would need to go through to create a build. It's just one more thing for someone new coming in to forget and waste time on because the tests won't run.

VsTest – Execute Unit Tests
I again attribute the difference to the virtual hardware specs likely being better.

Part 2 will cover what went into creating the container.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 2

In Part 1 I covered the reasoning for choosing a container and the performance gains that resulted in the build process.

Creating the container image

I’m not the first person to think of using a container for an Azure DevOps build agent by any means. There are a number of people who’ve blogged about it and you can get pre-configured, Microsoft created Azure DevOps agent Dockerfiles from Docker Hub. The problem with what’s on Docker Hub and most of the blogs is that most everything is Linux. Fine if you want to build .NET Core projects but Dynamics 365 CE needs the regular .NET Framework which isn’t going to run on Linux. From what I could see, Microsoft doesn’t have a Windows based agent on Docker Hub so I set off to build my own.

Remember the goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run headless EasyRepro / Selenium UI tests, and be able to deploy as needed. Here’s what I came up with. I’m sure it’s probably not the best or cleanest way of going about this, but it works.

Dockerfile



start.cmd



start.ps1



Dockerfile


Image Creation Basics
As you can see, to get software above and beyond what comes on the base OS image you need to write PowerShell or DOS commands to obtain and install it.

When looking through the script, if you see this it means the following commands are executed in PowerShell.

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

When you see this, it means the following commands are executed at the Windows command prompt.

SHELL ["cmd", "/S", "/C"]

Everything that gets executed starts off with RUN (environment variables are ENV), and you'll notice that multiple commands are chained together to execute in one batch. The reasoning for this is due to the way Docker assembles, or layers, the images. Each RUN command creates a new layer on top of the OS with the changes made. Layers are cached for reuse as the image is assembled. The more items you can group together, the fewer layers you end up with. This can be a good or a bad thing.

If there’s a failure building the image on a particular layer, instead of starting over from the beginning (and potentially wasting a lot of time) it will used the cached layers and start again at the one that failed. If the failure happens in the middle of 12 items chained together, it will have to start over at the first item in that layer. If you successfully create an image and then make a change someplace in the middle, it will use the cached images prior to the change but then need to re-perform all the actions on subsequent layers as that change might effect them. As such, it’s better to put the items that will change the least first and then work your way up toward ones that might get versioned more frequently. If I revise this image I’ll probably put more thought into that and reorder some things.
Some of these commands aren’t easy to test and escaping characters combined with environment variables and file paths with spaces in them gets pretty confusing. I ended up doing a lot of trial and error to get some of those combinations correct which was frustrating but it would have been even worse had the image creation needed to start from scratch each time.

Base image
I ended up using this image: microsoft/dotnet-framework:4.7.2-runtime-windowsservercore-ltsc2016

I had originally picked a later build of Windows Server Core but when I went to deploy to the Azure Container Registry (ACR) I found out that it doesn’t support that version of the image so I had to start all over with this image. ACR only supports Long-Term Servicing Channel (LTSC) versions of Windows. LTSC being versions of Windows that will be supported for 5 years and then have 5 additional years of extended support. It should also be mentioned that you don’t need to acquire the base OS for the container if you’re using something from a public repository. If you are using Docker to build the image, it will automatically download the image for you to use. This could be just an OS, or an OS with software already installed. In that case buyer beware I suppose. As far as licensing is concerned, Windows 10 allows you to run containers for development purposes. For production use, the Windows Server license of the host covers containers. I’m assuming then if you’re running the container in Azure, licensing is also covered.

NuGet
Downloads and installs NuGet.exe so we don’t have to use the Azure DevOps NuGet build task later.

Visual Studio Build Tools & Web Targets
This downloads and installs the Visual Studio test agent, base build tools, and the specialized build tool packages for the different things related to Dynamics 365 CE we might want to build. It also removes the install files once complete. You'll notice it's installing the test agent, but it doesn't seem to be compatible with what is required for running tests from Azure DevOps. What it does do is ensure the agent reports to Azure DevOps that the test agent is installed, to satisfy build task demands. If the agent doesn't show it meets the demands or requirements needed to run the build, it won't even start. You could skip this and manually edit the list of demands for the agent in Azure DevOps, but the downside is that you'd need to do it for every hostname the container gets. When you stop and start a container it receives a new and different hostname, which then shows up separately in the list of build agents. Not a big deal, but after a while you'll end up with a long list of dead agents that will need to be manually deleted. There's probably an API someplace for that, but the ideal solution would be to figure out a way to set or keep the same hostname each time. That might not be possible considering you can start any number of container instances and having the same name probably wouldn't fly.

Path Variables
So things like NuGet can be executed from anywhere in the file system.

.NET Targeting Packs
For building against different versions of the .NET Framework.

Build Agent
This installs the build agent software so the container can communicate with Azure DevOps. This URL comes from the Agent Pools page.

PowerShell Modules
This sets up NuGet.org as a package provider and installs the Azure and Dynamics 365 CE specific PowerShell modules we might need later. Having these pre-installed should prevent the need to import them each time in scripts requiring them.
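Roughly what that layer amounts to; the exact module list is my guess at what a D365 CE build might need, so adjust to taste:

# Register the NuGet provider and pre-install commonly used modules
Install-PackageProvider -Name NuGet -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module Microsoft.Xrm.Data.PowerShell -Force
Install-Module Microsoft.Xrm.Tooling.CrmConnector.PowerShell -Force
Install-Module AzureRM -Force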

Suppress Error Dialogs
Not sure this is 100% necessary but these registry entries should prevent Windows error dialogs which might hang something up.

Chocolatey Installs
Downloads and installs Chocolatey which then can be used to download other software packages. In this case Git, Node.js, and the Azure CLI. If you aren’t familiar with Chocolatey it’s package management for applications. Lots of commonly used software can be installed and updated from here via script.

NPM Installs
Grunt and gulp task runners.

Google Chrome Install
I tried installing Chrome via Chocolatey but ended up having problems getting it to work. It's hard to say if that version was the issue or if something else might have been wrong with the image during testing. Once installed it appears to be v71, which is the latest while I'm writing this. I'm not 100% sure what's special about the version that comes from the URL I'm using, but it seems to be used widely across the internet for silent installs. I should also point out that EasyRepro currently supports v2.38 of the Selenium Chrome driver. That version only lists official support for Chrome v65 – 67. So maybe it's just luck that it works and this is an unsupported setup. I tried updating the Chrome driver to a version which supports the latest Chrome browser version and then had issues running the tests. Since that's all kind of a mess I'd recommend just sticking to a known working combination like this.

Start.cmd

Just used to run start.ps1.

Start.ps1

This is used to configure and start the build agent when the container starts. I pulled this from the blog which is noted at the top. It needs your Azure DevOps instance name and a Personal Access Token (PAT) in order to connect to your agent pool. I'm sure there is probably a better way of handling those values other than hard-coding them into this file; I just haven't had a chance to look for it yet.
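The core of the script is just the agent's own config.cmd; a simplified sketch with placeholder values (the agent folder and flag set are assumptions, so check the agent documentation for the current options):

# start.ps1 (simplified) - configure the agent against your Azure DevOps organization and start it
$account = 'yourinstance'                  # Azure DevOps organization / instance name
$pat     = 'your-personal-access-token'    # PAT with Agent Pools (read, manage) access

Set-Location C:\BuildAgent                 # wherever the agent was unpacked in the image

.\config.cmd --unattended `
    --url "https://dev.azure.com/$account" `
    --auth pat --token $pat `
    --pool default --replace

.\run.cmd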

Conclusion

I’m glad I took the time to work through setting up a container because I learned a lot. I’m also glad I wrote these 2 blogs, when digging into the how’s and why’s I learned a lot more.

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 3

Part 1 covered the reasoning for choosing a container and the performance gains that resulted in the build process.

Part 2 reviewed the components that were chosen to be installed to support the build process.

Setup

Maybe a little bit out of order, but this is going to focus on getting up and running with containers on Windows. Again, there are almost certainly different ways, but this is what I ended up doing.

Download & install Docker Desktop for Windows – you'll need to create an account on Docker Hub before it will let you download. Select the option to use Windows containers during the install (if you miss it you can switch later). When that's done, start it up and log in with your Docker ID. When you see it running in the system tray, right-click and see if any of the menu items say "Switch to Windows containers". If so, select that. Either it doesn't get set correctly during install or it's doing something else that I'm unaware of.

If you want to create the image I was using, download the 3 files from Part 2 or get updated copies from the GitHub repo and put them in a folder. Edit start.ps1 with your Azure DevOps instance name and Personal Access Token.

Open a PowerShell prompt as Administrator and navigate to the new folder. 

Building the image

Here’s the command to start building the container. The period at the end is part of the command. The “-t” flag allows you to assign a name of your choosing and a tag like a version number to the image. The “-m” flag here is capping memory use to 2GB. This configuration will used any cached layers in the process to save time.

docker build -t d365agent:v1.1 -m 2GB .

If you don’t want to use layers from the cache for whatever reason you can disable the cache by adding the no-cache option.

docker build --no-cache -t d365agent:v1.1 -m 2GB .

Prior to doing this, if you are on a laptop or have power saving options enabled, you might want to adjust the settings so that your machine doesn't go to sleep while this is running. That seems to hang the process up and you'll need to restart from the point at which that happened. If it does happen to stop, you can restart from where you left off by running the same command (with cache enabled). If you make changes prior to restarting, the process will start at the lowest level that was changed.

You should see it start downloading the different parts of the Windows Server image and then start running through the installation steps defined in the Dockerfile. This process is going to take a while the first time through since it's going to download close to 6GB. Once downloaded, the base OS image will be cached for future use. Keep in mind as well that all the items getting installed need to be downloaded, so you might as well grab a coffee and come back because it's going to be running for a while.

When that finishes we now have an image that can be used in a container.

Creating and running a local container instance

You can create a new container instance using this image on your local machine with the following command. We're telling it to automatically restart in case it gets shut down, specifying some names, and finally telling it which image to use.

docker run -it -d --restart always --name "d365agent" --hostname "d365agent" d365agent:v1.1

Now that container instance is created you can use the start and stop commands to turn it on or off.

docker start d365agent

docker stop d365agent

If you want to access the filesystem of the running container use this command and you will be dropped into a PowerShell session in the container. This is good for verifying things got installed properly in case the logs during the build process don’t provide enough information.

docker exec -it d365agent powershell

Use the exit command to end the session. Be mindful of which filesystem you're in; it's easy to forget at times which one you are connected to.

If you’ve provided the correct Azure DevOps information, you should be able to check the available agents and see your container. If you don’t specify an agent pool name in start.ps1, it will go into the default pool. Now when run a build and specify this agent pool it will run on your local machine.


Creating an Azure Container Registry

Next you’ll want to create an Azure Container Registry which will be private by default. There aren’t many options to pick from so I’m not going to cover it. If you want screenshots refer to the quickstart. The only special option I picked was to enable the Admin User. All that does is allow you to use the registry name as the username when publishing images. In reality you maybe wouldn’t want to do that for a production instance seeing as if someone figure out your registry name they would also then have the username leaving only the password to figure out. Unlikely but anything is possible. Otherwise it’s an individual login or service principal.

At this point you can go through the process of setting up GitHub integration and build your image in Azure but I’m just going to upload the one from my local machine that I’ve already created.

Again open a PowerShell prompt as Administrator and navigate to the folder with your Dockerfile.
First you’ll need to log into the Azure Container Registry with this command replacing MyRegistry & AccessKey with the appropriate values from Settings –> Access Keys.

docker login MyRegistry.azurecr.io -u MyRegistry -p AccessKey

Next tag your local image for the Azure Container Registry with this command. It’s pointing at the local image we created earlier and it’s going to be placing it into a repository named “d365/d365agent”. The first “d365” acts as a namespace to categorize different images.

docker tag d365agent:v1.1 MyRegistry.azurecr.io/d365/d365agent:v1.1

Then use this command to upload the image.

docker push MyRegistry.azurecr.io/d365/d365agent:v1.1

It will need to upload all the image layers so again there will be some waiting involved.

Creating an Azure Container Instance

The only thing left to do is create an Azure Container Instance from the image. If you navigate to Repositories you will see “d365/d365agent”. If you drill in further you’ll see the initial version of the container image. From there you’ll have a Run Instance option. Once again the steps are pretty simple but if you want the visuals check out the quickstart.

Make sure you choose Windows as the OS type since it’s not the default. Use at least 2 GB of memory and I’d try for 2 processor cores. You might be able to get away with 1 if you aren’t using EasyRepro. When I tried to run those tests with just 1 core it consistently failed. Take note of which Resource Group you use as you’ll need to go there to manage the container instance once it’s created.

Once that starts you can sit back once again and wait while the container is provisioned.

Once that’s finished you should see something like this in Azure DevOps.

If you aren’t actively doing anything with the container instance I’d shut it down to save money.

Get Latest Solution Patch Name With PowerShell


Someone might find this useful if trying to move solutions around using Azure DevOps & PowerShell. I wanted to automate the export of the latest solution patch using the Microsoft.Xrm.Data.Powershell library as part of a build pipeline. It's pretty straightforward, but it requires the solution name - simple for the base solution, but when dealing with a patch it's not possible to predict what the name is going to be in order to build it into your script.

I came up with this. Given a solution uniquename this sets build variables for the uniquename & version of the latest patch of a solution or the base solution's uniquename & version if no patches exist.
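A minimal sketch of that script, assuming the Microsoft.Xrm.Data.PowerShell module, an existing $conn from Connect-CrmOnline, and the default '{base}_Patch_...' naming convention for patch uniquenames; the pipeline variable names are just examples:

Import-Module Microsoft.Xrm.Data.PowerShell

$solutionName = 'MySolution'   # base solution uniquename (placeholder)

# Pull back the base solution and any patches created from it
$fetch = @"
<fetch>
  <entity name='solution'>
    <attribute name='uniquename' />
    <attribute name='version' />
    <filter type='or'>
      <condition attribute='uniquename' operator='eq' value='$solutionName' />
      <condition attribute='uniquename' operator='like' value='$($solutionName)_Patch%' />
    </filter>
  </entity>
</fetch>
"@
$records = (Get-CrmRecordsByFetch -conn $conn -Fetch $fetch).CrmRecords

# A patch must carry a higher version than its parent, so sorting by version
# gives the latest patch, or the base solution when no patches exist
$latest = $records | Sort-Object { [Version]$_.version } -Descending | Select-Object -First 1

# Expose the values as build variables for the export task that follows
Write-Host "##vso[task.setvariable variable=SolutionUniqueName]$($latest.uniquename)"
Write-Host "##vso[task.setvariable variable=SolutionVersion]$($latest.version)"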

Using Chocolatey to Distribute Developer Tools - Part 1

First off, what is Chocolatey? Chocolatey is a package manager for applications. So NuGet for Windows quite literally as the package formats are the same. There’s one or more repositories (again it’s just NuGet) that keep track of all the packages available and then you connect to it using a client and download and install an application. It’s similar to NuGet in the sense that the community can contribute packages but there appears to be a lesser amount of contributions by “official” sources like Microsoft, Google, Apple, etc. However that doesn’t mean you aren’t able to download and install software from these companies from Chocolatey. Applications like Visual Studio, Chrome, and iTunes have all been download, most likely from a publicly available link, and converted into a package by someone in the community. I should call out right away that this isn’t meant to circumvent licensing or pirate anything. The original applications aren’t being modified in any way, just having some scripting added around it to automate installations. Since you’re basically relying on “some guy in his basement” to create these packages there is some amount of risk still that you could download something harmful. Granted the packages are virus scanned and community moderated but things can still slip through. No different than downloading something off NuGet, GitHub, or even the XrmToolBox for that matter. This just happens to be one of many approaches you could take if you are in a position to assert more control over the things your users / developers are downloading and installing. Making commonly used software easily available for people to install themselves lessens the need for them to search around the internet and potentially downloading something bad. Not to mention, self-service software installation can take some of the burden off the support team and free them up to do or things.

What goes into a package?


At the core there is a XML file which contains the metadata about the package, things like title, version, description, licensing info, dependencies, etc. can all be found here. This will be the source of information everyone sees in the download feeds. Instead of DLLs like a typical NuGet package, you’ll usually have an executable file of some sort or the files needed to run the application if an actual installation isn’t required. Around this is wrapped some PowerShell code which takes the downloaded content and performs the tasks required to get the application to a useable state. Ideally there should be no interaction from the user as one of the big selling points of Chocolatey is for system administrators to use it to silently install and manage software across a large number of machines. Each package is versioned by the repository so you can always go back to a previous version if need be and just like NuGet packages are immutable. Once it’s published that’s it, no changing it without increasing the version number. It can be painful if you’re developing packages but for consumers it can prevent the old bait and switch by tricking people into downloading something bad which once worked perfectly fine. Chocolatey will also keep track of what is installed on your machine and make updates available as they as released. Additional PowerShell can be put in place to run during the uninstall process so package creators can add any code required to assist with any cleanup that might need to happen on top of the application’s normal removal process.

But I’m a Dynamics person and not a system administrator


True, this is maybe geared a little more toward organizations than individuals, but that doesn’t mean it couldn’t be put to personal use. For developers, just think about the next time you need to rebuild your primary development machine. Hunting down all the installers and clicking through everything is about a day-long process, at least it is when I need to do it.

This is my example use case – creating a way to more easily install the Dynamics 365 CE SDK tools (Plug-in Registration Tool, Package Deployer, etc.). Back when there was a single download for the SDK it was easy because all the tools were right there. I can understand why managing that was probably difficult and going to an online-only SDK was the way to go, but it also introduced some new challenges in making these tools available. The new process for downloading them from NuGet using PowerShell is documented, but I’ll still say it would have been easier just to point people to the manual download link and then rename .nupkg to .zip and be done with it. Imagine this situation: you’re dealing with someone who isn’t really a developer but has just enough knowledge of the platform to know how plug-ins work, and you’re trying to describe to them over the phone how to edit a plug-in step. You’ll end up asking them if they have the Plug-in Registration Tool installed and of course they’ll say “no”, so then you start telling them they need to download it from NuGet and they’ll usually respond with something to the effect of, “WTH is nugget?”. So then you just ask for remote access and do it yourself.
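
For reference, the documented NuGet route boils down to a few lines of PowerShell along these lines (the package id is the one I believe currently holds the Plug-in Registration Tool, and the paths are placeholders):

# Grab nuget.exe and pull down the Plug-in Registration Tool package (sketch)
Invoke-WebRequest 'https://dist.nuget.org/win-x86-commandline/latest/nuget.exe' -OutFile .\nuget.exe
.\nuget.exe install Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool -OutputDirectory .\Tools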

Getting started


Install Chocolatey
https://chocolatey.org/install
Now you’re probably thinking that this is all command line stuff that you don’t want to memorize just to make it “easier” to install the latest version of an application. Not to worry, there is a UI that can be installed to make things a little more friendly.

Install Chocolatey UI
https://chocolatey.org/packages/ChocolateyGUI
At this stage you should have Chocolatey up and running and can download packages the community has already made available.
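
For reference, at the time of writing the install page amounts to running something like this from an elevated PowerShell prompt, after which the GUI is just another package:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

choco install chocolateygui -y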

Creating and hosting your own packages


You could go through the process of creating a package and then uploading it to the main Chocolatey site, but obviously if you’re creating something solely for personal or private use, public hosting isn’t an option. The other thing I’d throw out there is that it might be a violation of some terms of service or other legalese somewhere that forbids redistributing copyrighted material in this way, so if you don’t want “the lawyers” giving you a call you’d better look for another alternative.

Remember way back in the first paragraph where I mentioned that Chocolatey and NuGet are basically the same thing? That means there are a number of ways to create your own private NuGet server. The software itself is open source, so if you want to stand up your own copy and manage the hosting, that is an option. There are also some paid services / products which may work for you: ProGet offers a commercial product that you run on your own server (which is a pretty good deal if you’ve got a lot of users and need AD integration), or there’s MyGet, a cloud-based offering with paid and free plans.

Instead of those I’m going to focus on using Azure DevOps to host the packages & feeds and to automate the package maintenance process. Azure DevOps is free for up to 5 users or included with certain MSDN subscriptions. Package management (Azure Artifacts) is a paid add-on which is available with the same free access and then charges per user, per month after that, which last time I checked starts to get pricey when you’ve got a lot of developers.

In part 2 I’ll go into the project & package setup and the code used to keep the packages up to date. As I mentioned earlier there will be examples showing how to convert the SDK Tools into packages as well as how you can replicate packages from other feeds (like Chocolatey.org) for use in your own curated feed. Part 3 will cover how to create Artifacts in Azure DevOps and how you can use a Build Pipeline to watch for new versions and automatically update the packages. That last part is probably the most important: creating an installable package only to use it once is a waste of time. The real benefits come from installing or updating multiple times on multiple machines.

Using Chocolatey to Distribute Developer Tools - Part 2

This is going to focus on the Chocolatey package set up and the code used to build updated packages.

Project structure


A single git repository which contains folders for each package / application / tool. 
Each package folder contains the following:
  • .nuspec file which provides the metadata about the package
  • PowerShell file containing the scripts that will go into the Azure DevOps Build Tasks to:
    1. Build the updated package
    2. Push to the NuGet feed
    3. Commit the updated files back to source control
  • Tools folder containing 2 PowerShell files
    1. chocolateyInstall.ps1 which handles the installation once it’s on the target machine
    2. chocolateyUninstall.ps1 which handles the uninstallation from the target machine

.nuspec file


You can choose to change the metadata to whatever you’d like, but I’m going to use what the original package contains. Later you’ll see in the code used to update the packages that I’m just reading from the downloaded content and updating the matching elements.
The package <id> you’ll want to change to something unique, at the very least a variant of the original. Assuming you’re running the package update process in the same place as your feed (like Azure DevOps) you might run into an issue where it uses your private feed first to retrieve the package as opposed to the public source you actually intended. I’m guessing that’s by design, but the issue I ran into popped up in the script comparing the version in the .nuspec in source control versus the package that is publicly available. Since we aren’t specifying a version (because we won’t really know what it will be once this process is automated) it pulls back the first copy it finds based on the registered sources. When the compare happens it sees the same version and aborts because it doesn’t think there’s an update needing to happen.

If you’re going to have the process use the public package for metadata you can get away with filling in any dummy data, and after the first pass it will get updated to the real values. If you want to look at what’s there ahead of time, a simple bit of PowerShell like the sketch below will retrieve the package metadata.
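
A minimal version of that lookup, assuming the NuGet package provider is available (the package id and source are placeholders; use Chocolatey.org as the source for Chocolatey-based packages):

Find-Package -Name Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool -Source 'https://www.nuget.org/api/v2' | Select-Object Name, Version, Summary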


Be careful of the version numbers as once you’ve pushed to the feed you cannot delete a package, you can only un-list it or publish a new version with a higher version number. If you use your own version numbering scheme it’s probably not as big a deal but more than likely you’ll want to use the same version number as the public package.

Full .nuspec documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec

Package building scripts



Part 1 – building the package


This is used in a PowerShell build task and starts off with variables for the public package we’re using as a base and the corresponding file/folder locations in the project.

Next is the version comparison between the latest version of the public package and what is currently in the .nuspec file. The Plug-in Registration Tool is on NuGet so we use NuGet.org as the source. If you’re using a Chocolatey package as the source then you’ll need to use Chocolatey.org instead. On Azure DevOps, NuGet is already registered as a source but Chocolatey is not, in which case you’ll need to use Register-PackageSource and target Chocolatey.org (example). Once the comparison is made, if the version hasn’t increased then the process stops; before doing so a Pipeline Variable is set so we can use it to prevent any future build tasks from running. The other thing of note is the <files> section, which determines what should be included when the package command is run. In this case we want just the tools folder and the .zip file containing the actual content, and we’ll be ignoring the readme file and this PowerShell script file.
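
The check itself is only a few lines; a rough sketch under those assumptions (the package id, paths, and variable names are placeholders, and ContinueUpdate is the Pipeline Variable referenced in a later task):

$packageId = 'Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool'
$nuspecPath = '.\PluginRegistrationTool\PluginRegistrationTool.nuspec'

# Latest public version vs. the version we last packaged
$public = Find-Package -Name $packageId -Source 'https://www.nuget.org/api/v2'
[xml]$nuspec = Get-Content $nuspecPath

if ([Version]$public.Version -le [Version]$nuspec.package.metadata.version) {
    Write-Host "##vso[task.setvariable variable=ContinueUpdate]false"
    Write-Host "No new version found - nothing to do."
    return
}
Write-Host "##vso[task.setvariable variable=ContinueUpdate]true"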

Once it’s been determined an update is needed, the public package is downloaded. As part of the process I’m giving it a common name that corresponds to a value in the install file. This is solely to have one less thing to change when reusing this code.
From the package used in the version comparison I’m updating the .nuspec in my package. This isn’t a necessity, just more for information purposes. I’m also setting a Pipeline Variable here with the new version number so I can use it in a later build task.
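
Continuing the sketch above, the download and .nuspec update might look something like this (Save-Package is one way to do the download; the renamed .zip name has to match whatever chocolateyInstall.ps1 expects, and the saved file name may vary):

# Download the public package and give it the fixed name the install script looks for
Save-Package -Name $packageId -Source 'https://www.nuget.org/api/v2' -Path . | Out-Null
Rename-Item ".\$packageId.$($public.Version).nupkg" 'PluginRegistrationTool.zip'

# Copy the public version into our .nuspec (other elements can be copied the same way)
$nuspec.package.metadata.version = [string]$public.Version
$nuspec.Save((Resolve-Path $nuspecPath))

# Remember the new version for later build tasks
Write-Host "##vso[task.setvariable variable=NewVersion]$($public.Version)"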

At this point the only thing left is to run the Chocolatey command to create the package from the content of the current directory based on what is defined in the .nuspec file. Chocolatey is installed by default on the Azure DevOps hosted build agents, but if you’re running this from anywhere else you’ll need to run the PowerShell command to install it first.
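
That command is just choco pack pointed at the .nuspec (the file name here is a placeholder):

choco pack .\PluginRegistrationTool.nuspec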

Part 2 – publishing the package


This is used with a NuGet build task to upload the completed package to the feed which is going to host it. Specific to Azure DevOps I’m using a Personal Access Token that has read/write access to Packages to authenticate since the feed isn’t public. Also note that I used the NuGet v2 endpoint as opposed to the newer v3 endpoint. The format looks like this:

https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

Here’s where you’ll run into a 409 error uploading the package if you try to upload a package with a version number that already exists. I’ll cover more about the feed set up in part 3 of this blog.
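
For reference, doing the same push by hand with nuget.exe would look roughly like this. Register the feed with the PAT as the password first; the ApiKey can, as far as I can tell, be any non-empty value since the real authentication comes from the registered source credentials (all names here are placeholders):

nuget sources add -Name MyFeed -Source "https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2" -UserName anything -Password {PAT}

nuget push .\YourPackage.1.2.3.nupkg -Source MyFeed -ApiKey AzureDevOps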

Part 3 – Committing the changes


This is used in a separate PowerShell build task run after publishing. Since I’ve made modifications to the .nuspec file they’ll need to be committed back to source control so the next time this runs the package process won’t run unnecessarily after the version comparison. In the commit comments I’m using the Pipeline Variable I set earlier with the new version number we’re updating to. In the Azure DevOps build you also need to enable the option Allow scripts to access the OAuth token so that we can pass the System.AccessToken variable in the request header to authenticate.
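
The commit itself is plain git; a sketch assuming the NewVersion variable from earlier, a master branch, and the OAuth token option enabled (which exposes SYSTEM_ACCESSTOKEN to the script):

# Identity for the automated commit - use whatever makes sense for your project
git config user.email 'build@example.com'
git config user.name 'Build Agent'

git add *.nuspec
git commit -m "Updated package to $env:NEWVERSION"

# Authenticate the push with the build's OAuth token
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" push origin HEAD:master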

Installing & uninstalling


In this case these scripts are very simple. After a user chooses to install a package it’s downloaded to the local machine, at which point chocolateyInstall.ps1 kicks off. The SDK tools aren’t installable applications so “installation” is just a matter of copying the files somewhere. I chose the user’s Application Data folder and am creating a folder structure that will support installing tools from multiple sources. The Chocolatey unzip command will handle extraction and creation of any folders in the destination path that may be required; this is just unzipping my renamed package file that was downloaded from the original source. To complete things I use the Chocolatey create shortcut command to make a shortcut on the user’s desktop to the folder containing the executables.
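
A stripped-down chocolateyInstall.ps1 along those lines might look like this (the folder names and .zip location are placeholders and need to match how the package was built):

$toolsDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
$zipFile = Join-Path $toolsDir '..\PluginRegistrationTool.zip'
$destination = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'

# Extracts the zip and creates any missing folders in the destination path
Get-ChocolateyUnzip -FileFullPath $zipFile -Destination $destination

# Desktop shortcut to the folder holding the executables
$shortcut = Join-Path ([Environment]::GetFolderPath('Desktop')) 'Plug-in Registration Tool.lnk'
Install-ChocolateyShortcut -ShortcutFilePath $shortcut -TargetPath $destination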

When things get installed, Chocolatey is keeping the original package around so it knows what is on the machine to determine if updates are available, do reinstalls, and to access the uninstall script. The location will be similar to: C:\ProgramData\chocolatey\lib\YourPackage

As you might have guessed, chocolateyUninstall.ps1 runs when the user chooses to uninstall the package. Since it’s not an installed application I’m just deleting the things I created during the install and deleting the package from the lib folder. There are a number of different Chocolatey commands you could use to uninstall, but at the time I worked on this I couldn’t get the one I believe I was supposed to use working, so I went down this path. For the full list of Chocolatey commands, check out their documentation.
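
The matching chocolateyUninstall.ps1 is little more than a few deletes; a sketch using the same placeholder paths as above (the lib folder name needs to match your package id):

$destination = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'
$shortcut = Join-Path ([Environment]::GetFolderPath('Desktop')) 'Plug-in Registration Tool.lnk'

Remove-Item $destination -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item $shortcut -ErrorAction SilentlyContinue
Remove-Item (Join-Path $env:ChocolateyInstall 'lib\YourPackage') -Recurse -Force -ErrorAction SilentlyContinue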

That covers the package creation process. Part 3 will look at setting up Azure Artifacts & a Build Pipeline to keep things continuously updated as well as how to use the new feed in the Chocolatey clients. In case you missed it, part 1 gave an overview of Chocolatey and why you might want to use it in the Dynamics 365 space.

I’ve got everything up on GitHub so you can refer to that in case I don’t end up explaining something clearly enough.

https://github.com/jlattimer/D365Chocolatey

Visual Studio 2017 - Dynamics 365 SDK can't connect to server


1. I downloaded the Developer extensions and the developer toolkit. (vsix for dynamics)

2. Updated the manifest with the version 15 instead of the 14.

3. Got both installations completed 

4. Instead of mapping the folders installed, I mapped the path to the most recent version of the DLLs.

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/download-tools-nuget

And now I can't make the connection. When I try to start a new project I get the window that asks if I want to reconnect to CRM. It shows my server, organization, and user, and asks if I want to use this connection. I click yes and then I get this error:

Using Chocolatey to Distribute Developer Tools - Part 3

This is going to focus on setting up Azure DevOps to host the NuGet package feed and keeping the packages up to date. In case you missed the other posts in this series, part 1 provided an overview of Chocolatey and why you might want to use it as a D365 developer and part 2 focused on the package content and the scripts used to automate creation.

Feed setup


In your chosen Azure DevOps project head to Artifacts. If you don’t see that as an option, make sure under Project Settings that Artifacts are turned on. From there it’s just a matter of thinking up a name for the new feed. Once created, use the Connect to Feed option to retrieve the URL. We need this for the NuGet build task to push the package once it’s created as well as for connecting from the client. The v3 endpoint URL will be displayed here, but I’m using the v2 endpoint because it was the only way I could get it to work. It’s straightforward to convert from the v3 to the v2 URL.

v3: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v3/index.json

v2: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

If you haven’t created a Personal Access Token (PAT) that has read/write access to Packages yet, do so now.

Pipeline setup


I created a different build for each package in the project.
The source will be the git repository hosting the project, so the first thing that happens is the project code getting downloaded to the build agent. We’ll need those files both to build the package around whatever we download and to compare the version we last built against the current public version.


Step 1: PowerShell Script: Build package


You can inline the code from #1 in a PowerShell build task. If it’s determined a new package is available, the script will download the latest version and build the Chocolatey package from it.

Step 2: NuGet: Push to feed


Originally I wanted to make everything into a single PowerShell script and just use that, but I wasn’t able to get the authentication working using the ApiKey parameter along with a Personal Access Token. Luckily the existing NuGet build task worked fine. Use the command from #2 and select custom as the type. Additionally we don’t want this step to run if a new package wasn’t built, so to prevent that expand Control Options on the task and then Custom Conditions, and use this snippet to check the Pipeline Variable created in step #1 to determine whether the task should run.

eq(variables['ContinueUpdate'], 'true')


Step 3: PowerShell Script: Commit Updates


Inline the code from #3 in another PowerShell task to make sure your updated .nuspec file gets back into source control so it can be used the next time the process runs. You’ll also want to make sure that your .gitignore file excludes .zip and .nupkg files so we don’t accidentally store redundant copies of those. Don’t forget to add the same Control Option change as step 2.

Step 4: Build Triggers


The last thing is to set up a schedule so it keeps itself updated; this can be done from the Triggers tab inside the build. I’m using the free hosted agent and it doesn’t allow parallel builds, so I staggered the start times on the day of the week I have this running. You could very well run them all at the same time and they’d just queue up and run one at a time anyway.
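
If you happen to be using YAML pipelines instead of the classic editor, the equivalent is a cron schedule; a sketch (the cron expression and branch are placeholders, and always: true makes it run even when nothing has been committed):

schedules:
- cron: '0 6 * * 1'
  displayName: Weekly package check
  branches:
    include:
    - master
  always: true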

Done!

Chocolatey GUI


Here’s what you end up with after all this work. Click install and you’ll have the latest version of the tool installed in a few seconds. Click uninstall and with any luck (if you cleaned up after yourself properly) everything will be gone.





To connect Chocolatey GUI to Azure DevOps go to Settings and then Sources (the command line equivalent for the plain choco client is shown after this list).
  • Id: Display name for the feed
  • Source: The v2 Azure DevOps feed url
  • Username: Anything
  • Password: The Personal Access Token created earlier (or a different one with Read access to Packages). Using the normal username / password combination wasn’t working here.
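
Registering the same feed from the command line looks like this (the source name, feed URL, and PAT are placeholders):

choco source add -n=AzureDevOps -s="https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2" -u=anything -p={PAT}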

A few other notes


The download counts, package size, and package image don’t get displayed. That seems to be an issue between Chocolatey GUI and Azure DevOps; setting up a feed from a different private NuGet source didn’t have this problem.

Download all the code here: https://github.com/jlattimer/D365Chocolatey

Connect Dynamics CRM to Asp.Net Core 2.0


Can anyone refer me to documentation that shows how to connect an Asp.Net Core App to CRM online?

Things I found based on my research:

  1.  Web Api seems to be a logical way to get/query data from CRM.
  2. I haven't found a detailed document on how to connect an ASP.NET Core application to CRM.

Thank you in advance.
