Microsoft Dynamics 365

For a general introduction to the connector, please refer to RheinInsights Microsoft Dynamics 365 Connector. This connector supports Microsoft Dynamics 365 Server Version 8.x and above.

Dynamics 365 Configuration

Our Microsoft Dynamics 365 Connector supports user-based authentication against Dynamics and uses the REST APIs provided by your instance.

It therefore uses a crawl user for accessing the data. Authentication can take place via NTLM or Kerberos. We recommend that the crawl user's password does not expire.

Permissions

The crawl user needs to have the following read permissions.

  1. System users

  2. Teams

  3. Team members

  4. Business units

  5. Roles

  6. Role collections

  7. Role privileges

  8. Team roles

  9. User roles

  10. Accounts

  11. Addresses

  12. Annotations

  13. Phone calls

  14. Posts

  15. App modules

  16. Contacts

  17. Contracts

  18. Incidents

  19. KB articles

  20. Knowledge articles

  21. Leads

  22. Opportunities

  23. Sales orders

Active Directory

Dynamics 365 on-premises user ids are sAMAccountNames, so it may be necessary to map them to userPrincipalNames, i.e., mail addresses. For this mapping, a separate user must be used that performs the lookup against Active Directory. This user must have read access to the users in the global catalog of your Active Directory.
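As a hedged illustration of this mapping (the user names below are made-up examples; in production the lookup runs via LDAP against the Active Directory global catalog, not an in-memory table), a minimal sketch:

```python
# Hypothetical sketch of the sAMAccountName -> userPrincipalName mapping.
# The "directory" dict stands in for the Active Directory global catalog.
def to_upn(sam_account_name: str, directory: dict[str, str]) -> str:
    """Map a Dynamics on-premises user id (sAMAccountName) to a UPN."""
    try:
        return directory[sam_account_name.lower()]
    except KeyError:
        raise LookupError(f"no UPN found for {sam_account_name!r}")

directory = {"jdoe": "john.doe@contoso.com"}  # stand-in for the AD catalog
print(to_upn("JDoe", directory))  # → john.doe@contoso.com
```

The lookup is case-insensitive here because sAMAccountNames are not case-sensitive in Active Directory.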

Please refer to Ldap/Active Directory Security Transformer for the corresponding configuration instructions.

Content Source Configuration

The content source configuration of the connector comprises the following mandatory configuration fields.

Dynamics Configuration Dialog

  1. Base URL. This is the root URL of your Dynamics instance. Please add it without a trailing slash.

  2. Authentication method. Here you need to choose NTLM or Kerberos.

  3. Crawl user. This is the login name of the crawl user. The user must have the permissions described above, and the user name must be provided as domain\samaccountname.

  4. Crawl user’s password. This is the corresponding password of the crawl user.

  5. Public keys for SSL certificates. This configuration is needed if you run the environment with self-signed certificates, or with certificates that are not known to the Java key store.
    We use a straightforward approach to validate SSL certificates: to render a certificate valid, add the modulus of its public key into this text field. You can access this modulus by viewing the certificate within the browser.
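As a hedged illustration of the fields above (the host, domain, and user names are made-up examples, not values from your environment), the following sketch normalizes the Base URL, formats the crawl user login, and shows one programmatic way to read a certificate's public-key modulus as an alternative to viewing it in the browser:

```python
def normalize_base_url(url: str) -> str:
    """The Base URL must be configured without a trailing slash."""
    return url.rstrip("/")

def crawl_user_login(domain: str, sam_account_name: str) -> str:
    """The crawl user must be provided as domain\\samaccountname."""
    return f"{domain}\\{sam_account_name}"

def public_key_modulus_hex(pem_bytes: bytes) -> str:
    """Extract the RSA public-key modulus from a PEM certificate.
    Requires the third-party 'cryptography' package (imported lazily)."""
    from cryptography import x509
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return format(cert.public_key().public_numbers().n, "x")

print(normalize_base_url("https://crm.example.com/"))  # → https://crm.example.com
print(crawl_user_login("CONTOSO", "svc-crawl"))        # → CONTOSO\svc-crawl
```

Whether the connector accepts the modulus in this hex form or in the browser's display format is not specified here; copy it exactly as the certificate viewer shows it.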

Dynamics Configuration Dialog (continued)
  1. Included types. This is a list of Dynamics entities which are included in a crawl. Indirectly, each entity is enriched, where applicable, with associated posts, notes, annotations, or incident resolutions. The supported entity types are accounts, incidents, contacts, contracts, kbarticles, knowledgearticles, leads, opportunities, and salesorders.

  2. Excluded attachments. The file suffixes in this list determine which documents are not indexed, such as images or executables.

  3. Include post contents in crawling. If this is enabled, associated post entities are extracted and attached to the corresponding parent entities as listed in included types.

  4. Include resolution contents in crawling. If this is enabled, resolution notes are extracted and attached to the associated incident entities.

  5. Include phone call and mail contents in crawling. If this is enabled, associated phone call entities are extracted and attached to the corresponding parent entities as listed in included types.

  6. Include task contents in crawling. If this is enabled, associated task entities are extracted and attached to the corresponding parent entities as listed in included types.

  7. Definition of custom object relationships. Here you can specify, in JSON format, whether you would like to augment the document bodies or document metadata with further fields. The connector can also perform a lookup against directly related objects. The format is the following:

    {
      "account": { /* the name of the entity type */
        "relativeApiEndpointUrl": "/accounts", /* the API endpoint which should be used; /accounts is a known endpoint and will be ignored */
        "extendsExistingType": true, /* defines whether an existing type (account) should be extended; if set to true, "account" above must match an existing type, otherwise the configuration will be ignored */
        "limitReturnedItems": false, /* determines whether the operation should not include as many results as possible */
        "bodyFields": [ /* array of field names which should be included in the document body */
          "customBodyField"
        ],
        /* bodyContentLookupEntities is an array of referenced entities which should be included in the document body, such as SELECT description FROM industries WHERE industry_id=_industry_id */
        "bodyContentLookupEntities": [
          {
            "relativeApiEndpointUrl": "/industries",
            "idFieldInForeignEntity": "industry_id",
            "idFieldInInnerEntity": "_industry_id",
            "contentFields": [
              "description"
            ],
            "canBeCached": true
          }
        ],
        /* metadataLookupEntities is an array of referenced entities which should be included in the document metadata, such as SELECT name FROM industries WHERE industry_id=_industry_id */
        "metadataLookupEntities": [
          {
            "relativeApiEndpointUrl": "/industries",
            "idFieldInForeignEntity": "industry_id",
            "idFieldInInnerEntity": "_industry_id",
            "contentFields": [
              "name"
            ],
            "canBeCached": true
          }
        ],
        "entityType": "account", /* used for the click URL calculation */
        "parentType": "accounts", /* used in the parentItemUrl field of each document */
        "idField": "accountid", /* becomes the unique id of each document */
        "titleField": "accountName", /* becomes the title of each document */
        "parentTitle": "Accounts", /* becomes the parentItemTitle of each document */
        "applicationPermissionRole": "prv_permission", /* used for the ACL computation; here you need to specify the correct permission object */
        "hasAttachments": false /* determines whether annotations are given and attachments are to be expected for this object type */
      }
    }
  8. API Version. Please specify the API version which should be used by the connector. The connector then connects against <baseUrl>/api/data/<api version>/…

  9. Rate limit. This determines how many HTTP requests per second will be issued against Dynamics 365.

  10. Response timeout (ms). Defines how long the connector waits for a response until the API call is aborted and the operation is marked as failed.

  11. Connection timeout (ms). Defines how long the connector waits to establish a connection for an API call.

  12. Socket timeout (ms). Defines how long the connector waits to receive all data from an API call.

  13. The general settings are described at General Crawl Settings; you can leave these at their default values.
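To make the lookup-entity mechanism from the custom object relationships configuration above concrete, here is a hypothetical sketch. The in-memory `industries` table and the record values are made-up stand-ins for the real /industries endpoint and a crawled account:

```python
# Emulates one bodyContentLookupEntities entry: for a crawled record,
# resolve the referenced entity by id and pull the configured fields
# into the document body (SELECT description FROM industries
# WHERE industry_id = record._industry_id).
def resolve_lookup(record: dict, lookup: dict,
                   foreign_entities: dict, cache: dict) -> list[str]:
    key = record[lookup["idFieldInInnerEntity"]]
    if lookup["canBeCached"] and key in cache:
        entity = cache[key]
    else:
        entity = foreign_entities[key]  # stands in for a GET on relativeApiEndpointUrl
        if lookup["canBeCached"]:
            cache[key] = entity
    return [entity[field] for field in lookup["contentFields"]]

lookup = {
    "relativeApiEndpointUrl": "/industries",
    "idFieldInForeignEntity": "industry_id",
    "idFieldInInnerEntity": "_industry_id",
    "contentFields": ["description"],
    "canBeCached": True,
}
industries = {"42": {"industry_id": "42", "description": "Manufacturing"}}
record = {"accountid": "a-1", "_industry_id": "42"}

fields = resolve_lookup(record, lookup, industries, cache={})
print(fields)  # → ['Manufacturing']
```

The `canBeCached` flag maps naturally onto memoizing resolved foreign entities, as sketched here; whether the connector caches per crawl or per session is not specified in this document.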

After entering the configuration parameters, click on validate. This validates the content crawl configuration directly against the content source. If there are issues when connecting, the validator will indicate these on the page. Otherwise, you can save the configuration and continue with Content Transformation configuration.
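The rate limit setting described above can be sketched as a simple interval-based limiter. This is an illustration of the configured behaviour, not the connector's actual implementation:

```python
import time

class RateLimiter:
    """Caps outgoing HTTP requests at a fixed number per second."""
    def __init__(self, requests_per_second: float):
        self.interval = 1.0 / requests_per_second
        self.next_slot = time.monotonic()

    def acquire(self) -> None:
        """Block until the next request slot is free."""
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)
        self.next_slot = max(now, self.next_slot) + self.interval

limiter = RateLimiter(requests_per_second=5)
start = time.monotonic()
for _ in range(3):
    limiter.acquire()  # issue the HTTP request against Dynamics here
elapsed = time.monotonic() - start  # roughly 0.4 s for 3 calls at 5 req/s
```

The response, connection, and socket timeouts then bound each individual request once its slot has been acquired.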

Recommended Crawl Schedules

Content Crawls

The connector supports incremental crawls. These rely on the sorting capabilities of the Dynamics APIs, which are very limited: new entities are generally detected, but changed entities often are not, and deletions are not detected in this mode at all. Incremental crawls should run every 15 minutes.
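The limitation becomes clearer with a sketch of such a sort-based query. The `createdon` attribute is a standard Dynamics field used here as an assumption; the connector's actual watermark field and query are not documented here:

```python
from urllib.parse import urlencode

def incremental_query(entity_set: str, last_seen_iso: str) -> str:
    """Build an OData query returning records created after a watermark.
    A crawl keyed on creation time sees new records, but records that
    were merely modified or deleted never pass this filter."""
    params = {
        "$orderby": "createdon desc",
        "$filter": f"createdon gt {last_seen_iso}",
    }
    return f"/{entity_set}?{urlencode(params)}"

query = incremental_query("accounts", "2024-09-01T00:00:00Z")
print(query)
```

This is why the full scan remains necessary: it re-reads all entities and thereby picks up modifications and deletions the incremental pass misses.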

Due to the limitations of the incremental crawls, we recommend running a Full Scan Crawl every few hours or daily.

For more information, see Crawl Scheduling.

Principal Crawls

Depending on your requirements, we recommend running a Full Principal Scan every day or less often.

For more information, see Crawl Scheduling.