
Run Your Automated D365 CE UI Tests From Azure DevOps


EasyRepro

If you hadn't heard about it already, EasyRepro is a UI testing framework for Dynamics 365 CE built on top of Selenium, one of the more popular UI testing frameworks available today. Those who might have tried using Selenium directly with D365 CE in the past only found pain and suffering. For most people that ultimately led to the conclusion that it wasn't worth investing huge amounts of time creating tests for little upfront return. In my opinion EasyRepro now makes creating UI tests feasible as it abstracts away most of the complexities involved with Selenium development and boils down CE interactions to a single line of code in many cases. At the same time it's still flexible enough to extend or use the underlying core Selenium functionality. If you're at all interested, download the Git repository and give it a test drive.

Running Tests in Azure DevOps

Once you've cloned the EasyRepro project and run some of the examples on your local machine, you'll quickly realize that you can't tie your machine up all day running tests. So if you're planning on running dozens or hundreds of these tests you'll need to find an alternative place to run them from. Azure DevOps (ADO) can be that alternative because it offers solutions for 2 different scenarios when it comes to running automated tests: running a group of tests as part of a build & release pipeline, and having non-developers run individual tests on demand.

Project Set Up

For example purposes I'll use a Visual Studio test project referencing the Dynamics365.UIAutomation.Api (EasyRepro) libraries from NuGet. I've additionally added a reference to Microsoft.Azure.KeyVault to handle credentials so we don't need to worry about them getting into source control or about replacing them in any of the ADO processes. For this example we'll just be using Google Chrome as the browser of choice for testing.

Build Set Up

Assuming tests are created and things are working locally, get the project checked into ADO so we can set up a build. It's going to work like a normal .NET project build.


Start off by creating a new build in ADO using an empty job. I've got a repository with the EasyRepro test project that I'm pulling the source from initially. This is executing on a private build agent (running inside a Windows container) so I'm just using the command line to do a NuGet restore, but you could also use the standard NuGet build task. Build the solution or project like normal using the release configuration. Then probably the most important step, use the Publish Build Artifacts build task to publish the bin/release folder to Azure Pipelines/TFS. This is what makes the compiled assembly and other references available to the release we'll be setting up to run this.

To keep things updated, under Triggers, check Enable continuous integration so that the project gets rebuilt and published each time an update is made.

Release Set Up

In order for ADO to be able to run tests on demand we need to create a release pipeline. Again don’t start with an existing template, just pick the empty job. Select Artifacts & Add and then choose Build and use your project and the build pipeline you just created as the source. You have a few different options for the default version to use but I’d probably set it to use the latest each time.


In the first and only stage, open the tasks to begin setting up the test run. I start with a Visual Studio Test Platform Installer task. You might need this if you're running a private build agent. If you're using a Microsoft hosted agent you shouldn't need this because it's already installed, but having it there won't hurt anything. Then add a Visual Studio Test task and use version 2.*. Pay close attention to the configuration. Select tests using a Test run. Make sure the box indicating the Test mix contains UI tests is checked. Last, make sure the test platform version is using the version Installed by Tools Installer.




Depending on the build agent, you may or may not need to install Chrome (or other browsers for that matter). If things aren’t working, try installing it with the following PowerShell script:

$Path = $env:TEMP;
$Installer = "chrome_installer.exe";
Invoke-WebRequest "http://dl.google.com/chrome/install/375.126/chrome_installer.exe" -OutFile $Path\$Installer;
Start-Process -FilePath $Path\$Installer -Args "/silent /install" -Verb RunAs -Wait;
Remove-Item $Path\$Installer

Test Plan Set Up For Manual Test Execution

Hooking individual tests up to test cases is only required if you want the ability to run them on demand from the Test Plans area in ADO. It works better if you can link the release that was created to the test plan, but I'm pretty sure you'll need either VS Enterprise, Test Professional, MSDN or Test Manager to do so. If you've already got a test plan, right click on it and go to Test plan settings. Select the build that was just created and leave the build number at latest. Then choose the pipeline and stage from the release setup.


Assuming you’ve got a Test Case created that matches one of the EasyRepro tests, head back to the project in Visual Studio, open up Test Explorer, and find the test. Right-click on it and choose Associate to Test Case. It’s probably a good idea to make sure you’re connected to the project in ADO before you do this.



Type in the Test Case Id then Add Association. Save and close and when you open the test case in ADO, the automation status should now say ‘Automated’. Repeat for any other tests.

Under Test Plans you should be able to select individual or groups of tests and then, under the green Run button, pick Run for web application to execute. Having the release tied directly to the test plan saves the user from having to choose that same data each time, which avoids a few extra clicks and possible confusion.


A window will open confirming and validating the tests. If there aren’t any errors you can select View test run and sit back and wait for the tests to complete. With any luck they will, otherwise you’ll need to retrace your steps and figure out what’s wrong.



If you aren’t able to edit the test plan you can use Run with options and it will open a window where you can choose the items that got configured when linking at the test plan leave. When selecting the build the user will need to either know the build number or know enough to select find and the pick the correct (likely the most recent) build and pipeline from the list. 

Running In Builds & Releases Automatically

Really it’s just a matter of combining the all the steps from the Build and Release setups above with the exception of the publish build artifact task.


And that’s it. Not much to it once you figure it out so hopefully this saves you the trouble.

CRM Numeric Workflow Utilities v2.3.0.0

Call an external Web Service from Plugin - Dynamics crm ONLINE


Hi all.

I'm facing a permission error when I try to call an external web service from my plugin.

Notes:
1. The web service was developed by another company and they said that they could not put a friendly name (DNS) on this URL.
2. The web service is addressed by IP (http://200.279.18......).

So, as a temporary solution, I built a console app run by Windows Task Scheduler every minute that calls the web service and updates the Quote entity. But I need this web service executed in real time.

Any help?

Dynamics CRM 2016 (Online) WebAPI Query

I want a string field to sort in descending order as if it were an integer when using a Web API query.

ex) string desc -> 2, 10, 1
int desc -> 10, 2, 1



Query : new_projects?$select=new_dt_start,new_txt_project_no&$orderby=new_txt_project_no desc

Is there any way?

Keep Your EasyRepro NuGet Package References Correct

More than once while working with EasyRepro projects I’ve found myself in a situation where tests that were once working inexplicably stopped. After combing through code and verifying credentials I eventually figured out that references to one or more of the required Selenium components somehow got updated without my knowledge. An example of when this can be particularly frustrating is when the Chrome driver gets updated to the latest version which works with the always updated version of Chrome installed on my machine. Everything works fine when running tests locally. When deploying to Azure DevOps and running on an agent where an older version of Chrome is installed, everything fails because the latest driver doesn’t support older browser versions.

To avoid this issue I created a PowerShell script which will reset the Selenium component versions referenced in the project to what EasyRepro supports. Luckily this mismatch between driver and browser doesn't seem to affect the opposite scenario of what I previously described, at least when Chrome is being used.

Older driver version + newer browser version = OK
New driver version + older browser version = NOT OK

Code
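
A minimal sketch of the approach; the pinned package list and version numbers below are placeholders (use whichever versions your EasyRepro release actually references), and a full version would also rewrite the matching references in the .csproj as described below:

# Sketch only: pin whichever Selenium package versions your EasyRepro release supports
$pinnedVersions = @{
    'Selenium.WebDriver'              = '3.141.0'
    'Selenium.WebDriver.ChromeDriver' = '73.0.3683.68'
    'Selenium.Support'                = '3.141.0'
}

# Assumes the script is run from the test project folder (adjust the path if not)
$packagesConfig = Resolve-Path '.\packages.config'
[xml]$config = Get-Content $packagesConfig

$changed = $false
foreach ($package in $config.packages.package) {
    if ($pinnedVersions.ContainsKey($package.id) -and $package.version -ne $pinnedVersions[$package.id]) {
        Write-Host "Resetting $($package.id) from $($package.version) to $($pinnedVersions[$package.id])"
        $package.version = $pinnedVersions[$package.id]
        $changed = $true
    }
}

if ($changed) {
    $config.Save($packagesConfig.Path)
}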


When this runs it will update the packages.config file in the project and make sure that the versions listed at the beginning of the script match. If there is a mismatch it will also update any references in the project file. If this makes an update while the project is open in Visual Studio, you'll be prompted about a conflicting modification (because of the background update); go ahead and select Overwrite and everything should be good.

There are 2 ways of using the script.

Use during development

1. Add a folder named Tools to your test project and add this script and a copy of NuGet.exe to it.
2. Open the .csproj for your project and add these lines:

<Target Name="FixEasyRepro" BeforeTargets="EnsureNuGetPackageBuildImports">
   <Exec Command="powershell.exe -NonInteractive -ExecutionPolicy Unrestricted Tools\FixEasyReproPackageReferences.ps1" />
</Target>


This will execute the script prior to the automatic NuGet package restore which happens before the actual project build.

Use in Azure DevOps

The package restore and build process works a little differently in Azure DevOps. The recommended approach is to use a NuGet build task to restore prior to executing the build task. The script will still run, however the automatic package restore will not happen. If there was an update made you'd likely see a build failure because NuGet had already restored an incorrect package version. In order to maintain the modification to the project file so it works locally, add a PowerShell task which executes the script before the NuGet task runs. This will correct any mismatches so that the correct versions are restored. When the script re-runs during the build step, everything will already be correct and the build should complete.

Capture Pictures & Video From EasyRepro Tests

It goes without saying that tests are going to fail from time to time. Luckily EasyRepro does a pretty good job of providing descriptive error messages to make troubleshooting issues easier. Inevitably the first person to deal with an issue is going to be the developer, and most likely they'll need to re-run the test on their local machine to watch exactly what happens. Debugging, or at least offloading some of the test triage, can be made easier by capturing screenshots and/or videos of tests as they are running.

Screenshots

Alone this isn’t anything new as EasyRepro already has TakeWindowScreenShot which does a screen capture. It expects a path & file name and image format for parameters. I like to have the name of the test in my image name. We can use the TestContext to get the name of the test currently being executed for the file name.

You can get the TestContext object when it’s passed to the ClassInitialize method.

private static TestContext _testContext;
[ClassInitialize]
public static void SetupTests(TestContext testContext)
{
    _testContext = testContext;
}


I created a helper method to use when needing a screenshot. It will create a folder for the images, create a unique filename based on the test name & date, and then call TakeWindowScreenShot to grab the browser screen and save to disk. It also adds the file to the TestContext which is important if running tests in Azure DevOps.

private static void ScreenShot(InteractiveBrowser xrmBrowser, TestContext testContext)
{
    const ScreenshotImageFormat format = ScreenshotImageFormat.Jpeg;

    const string imagePath = "screenshots";
    Directory.CreateDirectory(imagePath);

    var testName = $"{testContext.TestName}_{DateTime.Now:yyyyMMddTHHmmss}";
    var filename = $"{imagePath}\\{testName}.{format.ToString().ToLower()}";
    xrmBrowser.TakeWindowScreenShot(filename, format);

    _testContext.AddResultFile(filename);
}


Videos

As it turns out this is really easy to set up (assuming tests are running under Visual Studio 2017 v15.5+). You’ll need to add a .runsettings file to your solution. To do so, right-click on your solution and add a new item (XML file) and make sure it’s named something like settings.runsettings, the .runsettings extension being key. Make sure this file gets into source control.

The basic content to get recordings working:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
   <RunConfiguration>
     <ResultsDirectory>.\TestResults</ResultsDirectory>
   </RunConfiguration>
   <DataCollectionRunSettings>
     <DataCollectors>
       <DataCollector uri="datacollector://microsoft/VideoRecorder/1.0"
                      assemblyQualifiedName="Microsoft.VisualStudio.TestTools.DataCollection.VideoRecorder.VideoRecorderDataCollector, Microsoft.VisualStudio.TestTools.DataCollection.VideoRecorder, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                      friendlyName="Screen and Voice Recorder">
         <!--Video data collector was introduced in Visual Studio 2017 version 15.5 -->
       </DataCollector>
     </DataCollectors>
   </DataCollectionRunSettings>
</RunSettings>


In Visual Studio, under the Test menu –> Test Settings select Select Test Settings File and choose the new .runsettings file.

And it should be as simple as that. After a test executes, there should be a sub-folder in the test results folder with a GUID for a name, and inside it will be a .wmv file which is the recording of the test being run in the browser. The one downside is that it appears to be an all-or-nothing approach to capturing video. If this is enabled, all tests in the run have individual recordings created. I couldn't find a way to start/stop recording on the fly from code.


Azure DevOps

To make use of the images and videos when running tests from Azure DevOps you’ll need to make a couple adjustments.

In the Visual Studio Test task under Settings file choose the .runsettings file you created.
Also if you aren’t already using a Publish Build Artifacts step after the tests run, do so and publish the TestResults folder.

Now any images that were added to the TestContext or videos captured will be available in the test results.


Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 3

Part 1 covered the reasoning for choosing a container and the performance gains that resulted in the build process.

Part 2 reviewed the components that were chosen to be installed to support the build process.

Setup

Maybe a little bit out of order, but this is going to focus on getting up and running with containers on Windows. Again there are almost certainly different ways, but this is what I ended up doing.

Download & install Docker Desktop for Windows. You'll need to create an account on Docker Hub before it will let you download. Select the option to use Windows containers during the install (if you miss it you can switch later). When that's done, start it up and log in with your Docker ID. When you see it running in the system tray, right-click and see if any of the menu items say “Switch to Windows containers”. If so, select that. Either it doesn't get set correctly during install or it's doing something else that I'm unaware of.

If you want to create the image I was using, download the 3 files from Part 2  or get updated copies from the GitHub repo and put them in a folder. Edit start.ps1 with your Azure DevOps instance name and Personal Access Token.

Open a PowerShell prompt as Administrator and navigate to the new folder. 

Building the image

Here’s the command to start building the container. The period at the end is part of the command. The “-t” flag allows you to assign a name of your choosing and a tag like a version number to the image. The “-m” flag here is capping memory use to 2GB. This configuration will used any cached layers in the process to save time.

docker build -t d365agent:v1.1 -m 2GB .

If you don’t want to use layers from the cache for whatever reason you can disable the cache by adding the no-cache option.

docker build --no-cache -t d365agent:v1.1 -m 2GB .

Prior to doing this, if you are on a laptop or have power saving options enabled, I'd recommend adjusting your settings so that your machine doesn't go to sleep while this is running. Sleep seems to hang the process up and you'll need to restart from the point at which that happened. If it does happen to stop, you can restart from where you left off by running the same command (with cache enabled). If you make changes prior to restarting, the process will start at the lowest layer that was changed.
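
One quick way to do that on Windows, from an elevated prompt, is to set the AC sleep timeout to never (set it back afterwards if you like):

# Keep the machine awake while the image builds (0 = never sleep on AC power)
powercfg /change standby-timeout-ac 0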

You should see it start downloading the different parts of the Windows Server image and then start running through the installation steps defined in the Dockerfile. This process is going to take a while the first time through since it's going to download close to 6GB. Once it's downloaded, the base OS image will be cached for future use. Keep in mind as well that all the items getting installed need to be downloaded, so you might as well grab a coffee and come back because it's going to be running for a while.

When that finishes we now have an image that can be used in a container.

Creating and running a local container instance

You can create a new container instance from this image on your local machine using the following command. We're telling it to automatically restart in case it gets shut down, specifying some names, and finally telling it which image to use.

docker run -it -d --restart always --name "d365agent" --hostname "d365agent" d365agent:v1.1

Now that container instance is created you can use the start and stop commands to turn it on or off.

docker start d365agent

docker stop d365agent

If you want to access the filesystem of the running container use this command and you will be dropped into a PowerShell session in the container. This is good for verifying things got installed properly in case the logs during the build process don’t provide enough information.

docker exec -it d365agent powershell

Use the exit command to end the session. Be mindful of which filesystem you're in; it's easy to forget at times which one you are connected to.

If you’ve provided the correct Azure DevOps information, you should be able to check the available agents and see your container. If you don’t specify an agent pool name in start.ps1, it will go into the default pool. Now when run a build and specify this agent pool it will run on your local machine.


Creating an Azure Container Registry

Next you’ll want to create an Azure Container Registry which will be private by default. There aren’t many options to pick from so I’m not going to cover it. If you want screenshots refer to the quickstart. The only special option I picked was to enable the Admin User. All that does is allow you to use the registry name as the username when publishing images. In reality you maybe wouldn’t want to do that for a production instance seeing as if someone figure out your registry name they would also then have the username leaving only the password to figure out. Unlikely but anything is possible. Otherwise it’s an individual login or service principal.

At this point you can go through the process of setting up GitHub integration and build your image in Azure but I’m just going to upload the one from my local machine that I’ve already created.

Again open a PowerShell prompt as Administrator and navigate to the folder with your Dockerfile.
First you’ll need to log into the Azure Container Registry with this command replacing MyRegistry & AccessKey with the appropriate values from Settings –> Access Keys.

docker login MyRegistry.azurecr.io -u MyRegistry -p AccessKey

Next tag your local image for the Azure Container Registry with this command. It’s pointing at the local image we created earlier and it’s going to be placing it into a repository named “d365/d365agent”. The first “d365” acts as a namespace to categorize different images.

docker tag d365agent:v1.1 MyRegistry.azurecr.io/d365/d365agent:v1.1

Then use this command to upload the image.

docker push MyRegistry.azurecr.io/d365/d365agent:v1.1

It will need to upload all the image layers so again there will be some waiting involved.

Creating an Azure Container Instance

The only thing left to do is create an Azure Container Instance from the image. If you navigate to Repositories you will see “d365/d365agent”. If you drill in further you’ll see the initial version of the container image. From there you’ll have a Run Instance option. Once again the steps are pretty simple but if you want the visuals check out the quickstart.

Make sure you choose Windows as the OS type since it’s not the default. Use at least 2 GB of memory and I’d try for 2 processor cores. You might be able to get away with 1 if you aren’t using EasyRepro. When I tried to run those tests with just 1 core it consistently failed. Take note of which Resource Group you use as you’ll need to go there to manage the container instance once it’s created.

Once that starts you can sit back once again and wait while the container is provisioned.

Once that’s finished you should see something like this in Azure DevOps.

If you aren’t actively doing anything with the container instance I’d shut it down to save money.

Get Latest Solution Patch Name With PowerShell


Someone might find this useful if trying to move solutions around using Azure DevOps & PowerShell. I wanted to automate export of the latest solution patch using the Microsoft.Xrm.Data.Powershell library as part of a build pipeline. It's pretty straightforward but it requires the solution name; that's simple for the base solution, but when dealing with a patch it's not possible to predict what the name is going to be in order to build it into your script.

I came up with this. Given a solution uniquename this sets build variables for the uniquename & version of the latest patch of a solution or the base solution's uniquename & version if no patches exist.
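
A rough sketch of the idea, assuming the Microsoft.Xrm.Data.Powershell module is loaded, $conn holds an existing connection (for example from Connect-CrmOnline), and the output variable names are just my own placeholders:

$baseSolutionName = 'MyBaseSolution'   # the base solution's uniquename

# Patches carry a parentsolutionid reference back to the base solution
$fetch = @"
<fetch>
  <entity name='solution'>
    <attribute name='uniquename' />
    <attribute name='version' />
    <link-entity name='solution' from='solutionid' to='parentsolutionid' link-type='inner'>
      <filter>
        <condition attribute='uniquename' operator='eq' value='$baseSolutionName' />
      </filter>
    </link-entity>
  </entity>
</fetch>
"@

$patches = (Get-CrmRecordsByFetch -conn $conn -Fetch $fetch).CrmRecords

if ($patches.Count -gt 0) {
    # Version is stored as a string, so sort as System.Version to find the true latest patch
    $latest = $patches | Sort-Object { [Version]$_.version } -Descending | Select-Object -First 1
}
else {
    # No patches exist, so fall back to the base solution itself
    $latest = (Get-CrmRecords -conn $conn -EntityLogicalName solution -FilterAttribute uniquename `
        -FilterOperator eq -FilterValue $baseSolutionName -Fields uniquename,version).CrmRecords[0]
}

# Make the values available to later tasks in the build pipeline
Write-Host "##vso[task.setvariable variable=SolutionName]$($latest.uniquename)"
Write-Host "##vso[task.setvariable variable=SolutionVersion]$($latest.version)"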


How to get Client ID and Client Secret Key in CRM 2016?


I have created a Dynamics 365 Online 30-day trial.

I want a Client ID and Client Secret to call the Web API using OAuth 2.0. How do I get them?

Using Chocolatey to Distribute Developer Tools - Part 1

First off, what is Chocolatey? Chocolatey is a package manager for applications. It's quite literally NuGet for Windows, as the package formats are the same. There are one or more repositories (again, it's just NuGet) that keep track of all the packages available, and then you connect to them using a client to download and install an application. It's similar to NuGet in the sense that the community can contribute packages, but there appear to be fewer contributions by “official” sources like Microsoft, Google, Apple, etc. However that doesn't mean you aren't able to download and install software from these companies through Chocolatey. Applications like Visual Studio, Chrome, and iTunes have all been downloaded, most likely from a publicly available link, and converted into a package by someone in the community. I should call out right away that this isn't meant to circumvent licensing or pirate anything. The original applications aren't being modified in any way, just having some scripting added around them to automate installations. Since you're basically relying on “some guy in his basement” to create these packages there is still some risk that you could download something harmful. Granted, the packages are virus scanned and community moderated, but things can still slip through. No different than downloading something off NuGet, GitHub, or even the XrmToolBox for that matter. This just happens to be one of many approaches you could take if you are in a position to assert more control over the things your users / developers are downloading and installing. Making commonly used software easily available for people to install themselves lessens the need for them to search around the internet and potentially download something bad. Not to mention, self-service software installation can take some of the burden off the support team and free them up to do other things.

What goes into a package?


At the core there is an XML file which contains the metadata about the package; things like title, version, description, licensing info, dependencies, etc. can all be found here. This will be the source of the information everyone sees in the download feeds. Instead of DLLs like a typical NuGet package, you'll usually have an executable file of some sort or the files needed to run the application if an actual installation isn't required. Around this is wrapped some PowerShell code which takes the downloaded content and performs the tasks required to get the application to a usable state. Ideally there should be no interaction from the user, as one of the big selling points of Chocolatey is for system administrators to use it to silently install and manage software across a large number of machines. Each package is versioned by the repository so you can always go back to a previous version if need be, and just like NuGet, packages are immutable. Once it's published that's it, no changing it without increasing the version number. It can be painful if you're developing packages, but for consumers it prevents the old bait and switch of tricking people into downloading something bad in place of something that once worked perfectly fine. Chocolatey will also keep track of what is installed on your machine and make updates available as they are released. Additional PowerShell can be put in place to run during the uninstall process, so package creators can add any code required to assist with any cleanup that might need to happen on top of the application's normal removal process.

But I’m a Dynamics person and not a system administrator


True, this is maybe geared a little more toward organizations rather than individuals, but that doesn't mean it couldn't be put to personal use. For developers, just think about the next time you need to rebuild your primary development machine. Hunting down all the installs and clicking through everything is about a day-long process, at least it is when I need to do it.

This is my example use case: creating a way to more easily install the Dynamics 365 CE SDK tools (Plug-in Registration Tool, Package Deployer, etc.). Back when there was a single download for the SDK it was easy because all the tools were right there. I can understand why managing that was probably difficult and going to an online-only SDK was the way to go, but it also introduced some new challenges in making these tools available. The new process for downloading them from NuGet using PowerShell is documented, but I'll still say it would have been easier just to point people to the manual download link, have them rename .nupkg to .zip, and be done with it. Imagine this situation: you're dealing with someone who isn't really a developer but has just enough knowledge of the platform to know how plug-ins work, and you're trying to describe to them over the phone how to edit a plug-in step. You'll end up asking them if they have the Plug-in Registration Tool installed and of course they'll say “no”, so then you start telling them they need to download it from NuGet and they'll usually end up responding with something to the effect of, “WTH is nugget?”. So then you just ask for remote access and do it yourself.

Getting started


Install Chocolatey
https://chocolatey.org/install
Now you’re probably thinking that this is all command line stuff that you don’t want to memorize just to make it “easier” to install the latest version of an application. Not to worry, there is a UI that can be installed to make things a little more friendly.

Install Chocolatey UI
https://chocolatey.org/packages/ChocolateyGUI
At this stage you should have Chocolatey up and running and can download packages the community has already made available.

Creating and hosting your own packages


You could go through the process of creating a package and then uploading it to the main Chocolatey site, but obviously if you're creating something solely for personal or private use, hosting it publicly isn't an option. The other thing I'd throw out there is that it might be a violation of some terms of service or other legalese somewhere that forbids redistributing copyrighted material in this way, so if you don't want “the lawyers” giving you a call you'd better look for another alternative.

Remember way back in the first paragraph where I mentioned that Chocolatey and NuGet are basically the same thing? That means there are a number of ways to create your own private NuGet server. The software itself is open source, so if you want to stand up your own copy and manage the hosting, that is an option. There are also some paid services / products which may work for you. ProGet offers a commercial product that you run on your own server (which is a pretty good deal if you've got a lot of users and need AD integration), or there's MyGet, a cloud-based offering that has paid and free plans.

Instead of those I'm going to focus on using Azure DevOps to host packages & feeds and to automate the package maintenance process. Azure DevOps is free for up to 5 users or included with certain MSDN subscriptions. Package management (Azure Artifacts) is a paid add-on which is available with the same free access and then charges per user, per month after that, which last time I checked starts to get pricey when you've got a lot of developers.

In part 2 I'll go into the project & package set up and the code used to keep the packages up to date. As I mentioned earlier there will be examples showing how to convert the SDK Tools into packages as well as how you can replicate packages from other feeds (like Chocolatey.org) for use in your own curated feed. Part 3 will cover how to create Artifacts in Azure DevOps and how you can use a Build Pipeline to watch for new versions and automatically update the packages. That last part is probably the most important. Creating an installable package only to use it once is a waste of time. The real benefits come from installing or updating multiple times on multiple machines.

Using Chocolatey to Distribute Developer Tools - Part 2

This is going to focus on the Chocolatey package set up and the code used to build updated packages.

Project structure


A single git repository which contains folders for each package / application / tool. 
Each package folder contains the following:
  • .nuspec file which provides the metadata about the package
  • PowerShell file containing the scripts that will go into the Azure DevOps Build Tasks to:
    1. Build the updated package
    2. Push to the NuGet feed
    3. Commit the updated files back to source control
  • Tools folder containing 2 PowerShell files
    1. chocolateyInstall.ps1 which handles the installation once it’s on the target machine
    2. chocolateyUninstall.ps1 which handles the uninstallation from the target machine

.nuspec file


You can choose to change the metadata to whatever you'd like, but I'm going to use what the original package contains. Later, in the code used to update the packages, you'll see I'm just reading from the downloaded content and updating the matching elements.
The package <id> you'll want to change to something unique, at the very least a variant of the original. Assuming you're running the package update process in the same place as your feed (like Azure DevOps) you might run into an issue where it uses your private feed first to retrieve the package as opposed to the public source you actually intended. I'm guessing that's by design, but the issue I ran into popped up in the script comparing the version in the .nuspec in source control versus the package that is publicly available. Since we aren't specifying a version (because we won't really know what it will be once this process is automated) it pulls back the first copy it finds based on the registered sources. When the compare happens it sees the same version and aborts because it doesn't look like there's an update that needs to happen.

If you’re going to have the process use the public package for metadata you can get away with filling in any dummy data and after the first pass it will get updated to the real values. If you want to look at what’s there ahead of time you could use this simple PowerShell to retrieve the package metadata.


Be careful of the version numbers as once you’ve pushed to the feed you cannot delete a package, you can only un-list it or publish a new version with a higher version number. If you use your own version numbering scheme it’s probably not as big a deal but more than likely you’ll want to use the same version number as the public package.

Full .nuspec documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec

Package building scripts



Part 1 – building the package


This is used in a PowerShell build task and starts off with variables for the public package we’re using as a base and the corresponding file/folder locations in the project.

Next is the version comparison between the latest version of the public package and what is currently in the .nuspec file. The Plug-in Registration Tool is on NuGet so we use NuGet.org as the source. If you're using a Chocolatey package as the source then you'll need to use Chocolatey.org. Using Azure DevOps, NuGet is already registered as a source but Chocolatey is not, in which case you'll need to use Register-PackageSource and target Chocolatey.org (example). Once the comparison is made, if the version hasn't increased then the process stops. Before doing so a Pipeline Variable is set so we can use it to prevent any future build tasks from running. The other thing of note is the <files> section, which determines what should be included when the package command is run. In this case we want just the tools folder and the .zip file containing the actual content, and we'll be ignoring the readme file and this PowerShell script file.
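
A condensed sketch of that check; the package id, .nuspec path, and the NewVersion variable name are placeholders, while ContinueUpdate is the variable the later build tasks key off of:

$packageId  = 'Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool'
$nuspecPath = '.\PluginRegistrationTool\PluginRegistrationTool.nuspec'

# Version currently committed to source control
[xml]$nuspec = Get-Content $nuspecPath
$currentVersion = [Version]$nuspec.package.metadata.version

# Latest public version; point -Source at the public feed so the private feed isn't consulted first
$latest = Find-Package -Name $packageId -ProviderName NuGet -Source https://www.nuget.org/api/v2
$latestVersion = [Version]$latest.Version

if ($latestVersion -le $currentVersion) {
    # Nothing new; set a pipeline variable so the remaining tasks skip themselves
    Write-Host '##vso[task.setvariable variable=ContinueUpdate]false'
    return
}

Write-Host '##vso[task.setvariable variable=ContinueUpdate]true'
Write-Host "##vso[task.setvariable variable=NewVersion]$latestVersion"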

Once it’s been determined an update is needed, the public package is downloaded. As part of the process I’m giving it common name that corresponds to a value in the install file. This is solely to have one less thing to change when reusing this code.
From the package used in the version comparison I’m updating the .nuspec in my package. This isn’t a necessity, just more for information purposes. I’m also setting a Pipeline Variable here with the new version number so I can use it in a later build task. 

At this point the only thing left is to run the Chocolatey command to create the package from the content of the current directory based on what is defined in the .nuspec file. Chocolatey is installed by default on the Azure DevOps hosted build agents, but if you're running this from anywhere else you'll need to run the PowerShell command to install it first.
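
For reference, the pack command itself is just this, run from the folder containing the .nuspec:

choco pack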

Part 2 – publishing the package


This is used with a NuGet build task to upload the completed package to the feed which is going to host it. Specific to Azure DevOps I’m using a Personal Access Token that has read/write access to Packages to authenticate since the feed isn’t public. Also note that I used the NuGet v2 endpoint as opposed to the newer v3 endpoint. The format looks like this:

https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

Here’s where you’ll run into a 409 error uploading the package if you try to upload a package with a version number that already exists. I’ll cover more about the feed set up in part 3 of this blog.

Part 3 – Committing the changes


This is used in a separate PowerShell build task run after publishing. Since I've made modifications to the .nuspec file, they'll need to be committed back to source control so the next time this runs the package process won't run unnecessarily after a version comparison. As part of the commit comments I'm using the Pipeline Variable I set earlier with the new version number we're updating to. In the Azure DevOps build you also need to enable the option Allow scripts to access the OAuth token so that we can pass the System.AccessToken variable in the request header to authenticate.
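
One way to script that commit step (a sketch; the branch name and commit identity are placeholders, and it assumes the pipeline variables are exposed to the script as environment variables):

# Identity for the build's commits (values are arbitrary)
git config user.email 'build@dev.azure.com'
git config user.name 'Azure DevOps Build'

# Commit the .nuspec that was bumped earlier, using the NewVersion pipeline variable in the message
git add *.nuspec
git commit -m "Update package to $env:NEWVERSION"

# Authenticate the push with the build's OAuth token (requires Allow scripts to access the OAuth token)
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" push origin HEAD:master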

Installing & uninstalling


In this case these scripts are very simple. After a user chooses to install a package it's downloaded to the local machine, at which point chocolateyInstall.ps1 kicks off. The SDK tools aren't installable applications, so “installation” is just a matter of copying the files somewhere. I chose the user's Application Data folder and am creating a folder structure that will support installing tools from multiple sources. The Chocolatey unzip command will handle extraction and creation of any folders in the destination path that may be required. This is just unzipping my renamed package file downloaded from the original source. To complete things I use the Chocolatey shortcut command to create a shortcut on the user's desktop to the folder containing the executables.
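
As a sketch, chocolateyInstall.ps1 ends up being not much more than the following; the folder names, shortcut name, and zip file name are placeholders rather than the exact values in the GitHub repo:

$toolsDir    = Split-Path -Parent $MyInvocation.MyCommand.Definition
$zipFile     = Join-Path $toolsDir '..\PluginRegistrationTool.zip'
$destination = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'

# Chocolatey helper that extracts the archive and creates any missing folders in the destination
Get-ChocolateyUnzip -FileFullPath $zipFile -Destination $destination

# Chocolatey helper that drops a shortcut to the extracted tools on the user's desktop
Install-ChocolateyShortcut `
    -ShortcutFilePath (Join-Path ([Environment]::GetFolderPath('Desktop')) 'Plugin Registration Tool.lnk') `
    -TargetPath $destination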

When things get installed, Chocolatey is keeping the original package around so it knows what is on the machine to determine if updates are available, do reinstalls, and to access the uninstall script. The location will be similar to: C:\ProgramData\chocolatey\lib\YourPackage

As you might have guessed chocolateyUninstall.ps1 will run when the user chooses to uninstall the package. Since it’s not an installed application I’m just deleting the things I created during the install and deleting the package from the lib folder. There are a number of different Chocolatey commands you could use to uninstall but at the time when I worked on this I couldn’t get the one I believe I was supposed to use to work so I went down this path. For the full list of Chocolatey commands, check out their documentation.
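
And a matching sketch of chocolateyUninstall.ps1, using the same placeholder paths:

$destination = Join-Path $env:APPDATA 'D365Tools\PluginRegistrationTool'
$shortcut    = Join-Path ([Environment]::GetFolderPath('Desktop')) 'Plugin Registration Tool.lnk'

# Remove what the install script created
Remove-Item $destination -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item $shortcut -Force -ErrorAction SilentlyContinue

# Also clear the cached package out of Chocolatey's lib folder, as described above
Remove-Item (Join-Path $env:ChocolateyInstall 'lib\YourPackage') -Recurse -Force -ErrorAction SilentlyContinue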

That covers the package creation process. Part 3 will look at setting up Azure Artifacts & a Build Pipeline to keep things continuously updated, as well as how to use the new feed in the Chocolatey clients. In case you missed it, part 1 gave an overview of Chocolatey and why you might want to use it in the Dynamics 365 space.

I’ve got everything up on GitHub so you can refer to that incase I don’t end up explaining something clearly enough

https://github.com/jlattimer/D365Chocolatey

Visual Studio 2017 - Dynamics 365 SDK can't connect to server


1. I downloaded the Developer extensions and the developer toolkit (VSIX for Dynamics).

2. Updated the manifest to version 15 instead of 14.

3. Got both installations completed.

4. Instead of mapping the installed folders, I mapped the path to the most recent version of the DLLs.

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/download-tools-nuget

And now I can't make the connection. When I try to start a new project I get the window that asks if I want to reconnect to CRM; it shows my server, organization, and user and asks whether I want to use this connection. I click yes and then I get this error:

Using Chocolatey to Distribute Developer Tools - Part 3

This is going to focus on setting up Azure DevOps to host the NuGet package feed and keeping the packages up to date. In case you missed the other posts in this series, part 1 provided an overview of Chocolatey and why you might want to use it as a D365 developer and part 2 focused on the package content and the scripts used to automate creation.

Feed setup


In your chosen Azure DevOps project head to Artifacts. If you don't see that as an option, make sure under Project Settings that Artifacts are turned on. From there it's just a matter of thinking up a name for the new feed. Once created, use the Connect to Feed option to retrieve the URL. We need this for the NuGet build task to push the package once it's created as well as for connecting from the client. The v3 endpoint URL will be displayed here but I'm using the v2 endpoint because it was the only way I could get it to work. It's straightforward to convert from the v3 to the v2 URL.

v3: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v3/index.json

v2: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

If you haven’t created a Personal Access Token (PAT) that has read/write access to Packages yet, do so now.

Pipeline setup


I created a different build for each package in the project.
The source will be the git repository hosting the project. This way the first thing that will happen is the project code will be downloaded to the build agent. We'll need those files to build the package around whatever we download, as well as to check the version we last built against the current public version.


Step 1: PowerShell Script: Build package


You can inline the code from #1 in a PowerShell build task. If it’s determined a new package is available, the script will download the latest version and build the Chocolatey package from it.

Step 2: NuGet: Push to feed


Originally I wanted to make everything a single PowerShell script and just use that, but I wasn't able to get the authentication working using the ApiKey parameter along with a Personal Access Token. Luckily using the existing NuGet build task worked fine. Use the command from #2 and select custom as the type. Additionally we don't want this step to run if a new package wasn't built, so to prevent that expand Control Options on the task and then Custom Conditions. Then use this snippet to check the Pipeline Variable created in step #1 and determine whether the task should run:

eq(variables['ContinueUpdate'], 'true')


Step 3: PowerShell Script: Commit Updates


Inline the code from #3 in another PowerShell task to make sure your updated .nuspec file gets back into source control so it can be used the next time the process runs. You'll also want to make sure that your .gitignore file excludes .zip and .nupkg files so we don't accidentally store redundant copies of those. Don't forget to add the same Control Option change as step 2.

Step 4: Build Triggers


The last thing is to set up a schedule so it keeps itself updated; this can be done from the Triggers tab inside the build. I'm using the free hosted agent and it doesn't allow parallel builds, so I staggered the start times on the day of the week I have this running on. You could very well run them all at the same time and they'd just queue up and run one at a time anyway.

Done!

Chocolatey GUI


Here’s what you end up with after all this work. Click install and you’ll have the latest version of the tool installed in a few seconds. Click uninstall and with any luck (if you cleaned up after yourself properly) everything will be gone.





To connect to Azure DevOps go to Settings and then Sources.
  • Id: Display name for the feed
  • Source: The v2 Azure DevOps feed url
  • Username: Anything
  • Password: The Personal Access Token created earlier (or a different one with Read access to Packages). Using the normal username / password combination wasn’t working here.

A few other notes


The download counts, package size, and package image don't get displayed. That seems to be an issue between Chocolatey GUI and Azure DevOps. Setting up a feed from a different private NuGet source didn't do this.

Download all the code here: https://github.com/jlattimer/D365Chocolatey

Connect Dynamics CRM to Asp.Net Core 2.0


Can anyone refer me to documentation that shows how to connect an Asp.Net Core App to CRM online?

Things I found based on my research:

  1. Web API seems to be a logical way to get/query data from CRM.
  2. I haven't found a detailed document on how to connect an ASP.NET Core application to CRM.

Thank you in advance.

Run Automated Tests When Your D365 CE Org Gets Updated

With online D365 CE instances getting updates pushed regularly, it's almost impossible to ensure that you've tested against the latest bits. That's assuming you even know about each update, since the majority go in silently in the background. Realizing that organizations need to test before updates roll out to production, or at least be aware of issues caused by updates before users start reporting them, Microsoft has taken the first steps to allow you to get ahead of potential issues.
The Microsoft.Xrm.Tooling.Testing PowerShell library (in preview at the time of writing) has been released, which provides hooks into a few key backend operations that you didn't have access to previously. When one of these events occurs you have the opportunity to run automated tests.

Current backend triggers:
  • OrganizationDatabaseUpdate
  • OrganizationDatabaseUpgrade
  • OrganizationBitsUpdate
  • OrganizationPackageInstall
  • OrganizationPackageUninstall

How does it work?

To start, a developer will need to create a .NET unit test project that will contain any tests that need to be run. The tests really can't be 'unit tests' because by definition unit tests are meant to be totally decoupled from everything, so running tests that don't act on a live environment isn't going to tell you anything you didn't already know. Really what you are after would be better described as 'integration tests'. Tests that connect to the live environment that just registered the change are going to uncover things that might not be working. Depending on the complexity of your business and customizations it might be a little tricky to create meaningful tests. Once the tests are written you can use the provided PowerShell commands to upload the tests to the Microsoft environment and register when they should be run. Then when a trigger you've hooked into is tripped, your tests run.

Test suite setup

I’ll point out right now that the documentation doesn’t appear to be complete yet so there’s probably a better way to do some of these things but without knowing I just improvised to get things working.

For test purposes I created a unit test project that is only going to perform one operation which is to create a contact record. My ‘test’ consists of a CrmServiceClient connection that creates the record and a basic assert statement that passes the test if there aren’t any exceptions. I’ve also got an app.config file that contains the connection string populated with valid credentials. Also note that a better solution would be to use Azure Key Vault and have it read any credentials from there. It’s a bit more secure and allows for updating the values without having to re-deploy things.

Once you’ve got the code working, zip up the contents of the debug/release folder. I mentioned before that the tests get uploaded to Microsoft’s environment. Best I can tell is that the zip file containing the tests needs to be available via a URL on the public internet as opposed to uploading a local file (again maybe I’m doing it wrong). This appears to only be required initially as the documentation states, “Testing infrastructure will download and store the test suite ZIP file for future runs. Changes to this URL will not affect the test run. If you need to update the test suite, please run this cmdlet again pointing to the new test suite's URL”. I dropped my zip file on a blob store and generated a short-lived SAS token I could add on to the URL.

If you haven’t already done so, install Microsoft.Xrm.Tooling.Testing in PowerShell

install-module Microsoft.Xrm.Tooling.Testing

Then register a test suite to contain the test triggers. Based on how things get configured, a test suite will center around the tests being executed. If you're going to want different sets of tests to run for different triggers, it looks like you'll need separate suites.
  • $apiUrl = Online Management API Service URL
  • $orgId = D365 CE organization ID found on the Developer Resources page
  • $orgUniqueName = D365 CE organization Unique Name found on the Developer Resources page
  • $testSuiteUrl = URL of the unit test project zip file
  • $completionCallbackUrl = URL that will have basic test data sent to it when complete via webhook

$user = "admin@test.onmicrosoft.com"
$password = "password"
$cred = New-Object System.Management.Automation.PSCredential `
     ($user, (ConvertTo-SecureString $password -AsPlainText -Force))
$apiUrl = "https://admin.services.crm.dynamics.com"
$orgId = '38600bb5-ed48-47e4-8844-e05c5516b446'
$orgUniqueName = 'org1a1aa611'
# Test Suite
$testSuiteName = 'IntegrationTests'
$testSuiteUrl = 'https://mystorage.blob.core.windows.net/test/AutoTest1.zip?sv=SASKEY'
$testUserCredential = $cred
$completionCallbackUrl = 'https://prod-03.westus.logic.azure.com:443/workflows/...'
# Test trigger
$testTriggerName = 'IntegrationTests-OrganizationPackageInstall'
$testTriggerType = 'OrganizationPackageInstall'

Register-TestSuite -ApiUrl $apiUrl -OrganizationId $orgId -OrganizationUniqueName $orgUniqueName `
     -TestSuiteName $testSuiteName -TestSuiteUrl $testSuiteUrl -TestUserCredential $testUserCredential `
     -ApiCredential $cred -CompletionCallbackUrl $completionCallbackUrl


You should see a result of true be returned if successful.

Test trigger setup

Set up one or more of the backend triggers in the test suite that will execute your tests. Here the trigger is going to be 'OrganizationPackageInstall', which I've assumed to mean when one of the system level solutions or a solution found in the D365 CE Administration Center is installed (and hopefully updated as well).

Register-TestTrigger -ApiUrl $apiUrl -OrganizationId $orgId -TestSuiteName $testSuiteName `
     -TestTriggerName $testTriggerName -TestTriggerType $testTriggerType -ApiCredential $cred


Again you should see a result of true returned if successful.

To confirm everything is set up correctly, you can run this command to see what is registered for a specific organization.

Get-TestSuites -ApiUrl $apiUrl -OrganizationId $orgId -ApiCredential $cred

It will show the suites and triggers that are currently configured.

Testing this out

You can manually trigger all the tests in a suite to ensure that everything is working.

Start-TestRun -ApiUrl $apiUrl -OrganizationId $orgId -TestSuiteName $testSuiteName `
     -ApiCredential $cred


This will return the ID of the test run.

You can check the status of a run at any point.

$testRunId = '2c5d1e62-ff35-43c3-9681-83de5fbb2235'
Get-TestRun -ApiUrl $apiUrl -OrganizationId $orgId -TestRunId $testRunId `
     -TestSuiteName $testSuiteName -ApiCredential $cred


This will output information about the run including status (running, succeeded, etc.). If complete it returns an event log but its purpose must be something other than reporting on the actual test results as it contains no references to what was executed.

Installing a solution from the Administration Center will trigger this particular event as well.

More about test results

From what I can tell so far, there isn’t a way to see if individual tests have passed or failed. That said, it might be best to add your own way of logging the test results so you can see exactly what passed and what didn’t. When the run completes it triggers a webhook with some basic information about the run. Lots of options open up to get notified so you can go and look at the results. I used a Flow which runs based on an HTTP request being received and I used the URL it generated as the CompletionCallbackUrl when registering the suite. Then it just sends an email notification with all the data.

EasyRepro tests

At the moment it doesn’t appear that whatever is running the tests is set up to handle EasyRepro / UI tests. All my attempts failed which again might mean I’m just doing it wrong. For v1 of this solution I can see where Microsoft wouldn’t have yet built out or allowed for the various supported browsers being installed on the infrastructure to allow for this sort of testing.

If you’re interested in running these types of tests you’re going to have to rely on your own test platform for the execution. In my case I’ve got EasyRepro tests set up to run in Azure DevOps (ADO). In order to get them to run based on one of these provided triggers you can either create a dummy unit test which uses the ADO APIs to execute a build or release directly or just pass the dummy test and have the webhook call a Flow where you use the built in ADO connector to kick something off.

Should the built-in ADO connector not work for you, an HTTP action can also be used. Here's an example of queuing a build using the REST API.

POST  https://{your instance}.visualstudio.com/{project name}/_apis/build/builds?api-version=5.0

Headers
Content-Type: application/json

Authorization: Basic
Username: {anything}
Password: {Azure DevOps Personal Access Token}

Body
{
        "definition": {
            "id": {build definition id}
        }
}
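
The same call from PowerShell might look like this (instance, project, PAT, and definition id are placeholders):

$pat = '{Azure DevOps Personal Access Token}'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body = @{ definition = @{ id = 42 } } | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://{your instance}.visualstudio.com/{project name}/_apis/build/builds?api-version=5.0' `
    -Headers $headers -ContentType 'application/json' -Body $body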


Once that kicks off and runs, it's up to you to handle any further notifications and reporting on test results.

Cleaning Up Leftover WebDriver Processes

When developing tests using EasyRepro there are going to be plenty of times during the debugging process where you end up cancelling before the test runs to completion. When that happens, the WebDriver for the browser you're testing on doesn't have the chance to shut down properly. The most obvious result of that is the browser being left open. Annoying, but easily fixed by just closing it. The other hidden side effect is that you end up with a number of leftover driver processes running on your machine.



Probably not going to grind your machine to a halt, but it would be better if those processes got cleaned up without having to remember to manually go through and End task every single one of them.

Fortunately there's an easier (albeit not perfect) way to handle this. With a few lines of code you can check the running processes and stop any offenders. You'll notice that I added the code to the AssemblyInitialize method and not AssemblyCleanup. In a perfect world you'd do the clean up at the end after all tests have run. Unfortunately that won't work here; in the event you cancel a test or abort in some way, AssemblyCleanup doesn't run. The next best thing is to run the clean up code before starting a new run of tests. Once this is in place you shouldn't have more than 1 process leftover at any given time.


using System.Diagnostics;

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext testContext)
{
    // ChromeDriver processes left behind by cancelled Chrome runs
    var chromeProcesses = Process.GetProcessesByName("chromedriver");
    foreach (var process in chromeProcesses)
    {
        process.Kill();
    }

    // GeckoDriver processes (Firefox)
    var geckoProcesses = Process.GetProcessesByName("geckodriver");
    foreach (var process in geckoProcesses)
    {
        process.Kill();
    }

    // IEDriverServer processes (command line server for the IE driver)
    var ieDriverServiceProcesses = Process.GetProcessesByName("IEDriverServer");
    foreach (var process in ieDriverServiceProcesses)
    {
        process.Kill();
    }
}