
Mailbox Replication Service was unable to connect to the remote server

Office 365 Mailbox Migration Error –


The migration encountered an error

The Mailbox Replication Service was unable to connect to the remote server using the credentials provided. Please check the credentials and try again. The call to 'https://outlook.careexchange.in/EWS/mrsproxy.svc' failed. Error details: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate,NTLM'. –> The remote server returned an error: (401) Unauthorized. (The same error repeats several times in the migration report.)

Solution –

When a remote mailbox move is initiated from Office 365, it connects to the migration endpoint. In this case, the migration endpoint was holding the wrong password for the on-premises account.

Open the Office 365 Exchange admin center, go to Recipients > Migration, and click Migration endpoints.

Update the migration endpoint with the correct password for the account that has access to the on-premises mailboxes.

Rerun the migration.

Office 365 Hybrid Duplicate Mailboxes

The user has an Exchange 2013 hybrid configuration with Azure Active Directory password sync. The OUs synced successfully and licenses were provisioned for the mailboxes. Then one weird thing happened: empty mailboxes were provisioned in the cloud while the users still had their on-premises mailboxes, causing duplicate mailboxes.

Ideally this shouldn't happen.

By design, when you assign a license to an on-premises mailbox user in the cloud, the mail settings should show: "This user's on-premises mailbox hasn't been migrated to Exchange Online. The Exchange Online mailbox will be available after migration is completed."

What happened instead was weird: assigning a license to an on-premises mailbox user provisioned an empty mailbox in the cloud, causing duplicate mailboxes. As the duplicates stayed provisioned for a day or so, the empty cloud mailboxes received emails sent by Office 365 users in the same domain.

Took a backup of the emails using an eDiscovery search.

Easy way: before July 1, 2017, you use In-Place eDiscovery & Hold to search and download the results as PSTs.

Choose the mailboxes to be exported.

Choose the search criteria (all items in this case).

Note: use Internet Explorer for the Export PST application to work.

As specified, after July 2017 you have to use the Security & Compliance Center (https://protection.office.com) to do the same process.

In the left pane of the Security & Compliance Center, click Search & investigation > Content search.

  1. On the Content search page, select a search.
  2. In the details pane, under Export results to a computer, click Start export.
  3. On the Export the search results page, under Include these items from the search, choose one of the following options:
  4. Under Export Exchange content as, choose one of the following options:
    • One PST file for each mailbox   Exports one PST file for each user mailbox that contains search results. Any results from the user’s archive mailbox are included in the same PST file.

Let's see the traditional way to copy mails to another mailbox.

Add the Office 365 administrator to the Discovery Management role group and assign the Mailbox Import Export role:

Add-RoleGroupMember "Discovery Management" -Member admin@domain.onmicrosoft.com
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User admin@domain.onmicrosoft.com


Close PowerShell and reopen it so the new role assignment takes effect.

You can use the -EstimateResultOnly switch to check the statistics before the actual run. For example, in my case:

Get-Mailbox Test20 | Search-Mailbox -SearchQuery "received:02/01/2013..01/17/2017" -TargetFolder Backup -TargetMailbox Backup@careexchange.in

The results were exported to the Backup mailbox.

To check the exact item counts in the folders, you can also use:

Get-MailboxFolderStatistics mailboxname | Select Name,FolderSize,ItemsInFolder

Now the mailboxes are backed up to PST or copied to a different mailbox.

Re-ran the Hybrid Configuration Wizard from the Exchange 2013 server to make sure everything was fine.

Good to know: if you have customized the coexistence connectors, rerunning the Hybrid Configuration Wizard puts them back to the default hybrid configuration. In my case I couldn't use TLS at one specific site and had manually specified public IPs on the mail flow connectors; rerunning the wizard reverted them to TLS.

Now remove the license from the mailboxes using the GUI, or connect to the MSOnline service for bulk modifications.

Connect-MsolService
Set-MsolUserLicense -UserPrincipalName test@careexchange.in -RemoveLicenses "orgname:ENTERPRISEPACK"

To list licensed users, so you can check and remove the right licenses:

Get-MailUser | Where-Object {$_.IsLicensed -like "True"} | FT UserPrincipalName,Licenses

Now all the duplicate mailboxes should have been converted to mail users. Before migrating them back, make sure you permanently delete them from the soft-deleted mailbox list.

Using Office 365 PowerShell, to list soft-deleted mailboxes:

Get-Mailbox -SoftDeletedMailbox

To permanently delete all soft-deleted mailboxes (be careful with this):

Get-Mailbox -SoftDeletedMailbox | Remove-Mailbox -PermanentlyDelete

Using the MSOL service:

Connect-MsolService

Make sure it doesn't return any deleted users either, which may have been duplicated:

Get-MsolUser -ReturnDeletedUsers

To remove a deleted user:

Remove-MsolUser -UserPrincipalName test@careexchange.in -RemoveFromRecycleBin

To remove all deleted users (be careful with this):

Get-MsolUser -ReturnDeletedUsers | Remove-MsolUser -RemoveFromRecycleBin -Force

Now we are good to migrate them back again.

Still having issues?

Remove the MSOL user from the cloud so the object syncs back again.

Get-MsolUser -UserPrincipalName user@careexchange.in

Removing the MSOL user (be careful with this):

Get-MsolUser -UserPrincipalName user@careexchange.in | Remove-MsolUser

Removing the MSOL user from the recycle bin (be careful with this):

Get-MsolUser -UserPrincipalName user@careexchange.in -ReturnDeletedUsers |
Remove-MsolUser -RemoveFromRecycleBin

 

Once removed, force a sync or wait for the normal sync interval:

Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta

A workaround that worked in some environments, using Exchange on-premises: remove the license for the user, wait a few minutes, then log in to the Exchange admin center (Office 365 tab) on your on-premises Exchange server and initiate the remote move from there, choosing Remote Move Migration.

Good to know: I compared the immutable IDs, and they look the same.

$immuOnPremID is the on-premises immutable ID; $immuCloudID is the cloud immutable ID.

Import-Module ActiveDirectory
Import-Module ADSync
$cred = Get-Credential

Connect-MsolService -Credential $cred
$GUIDbyte = (Get-ADUser TestUser).objectGUID.ToByteArray()

$immuOnPremID = [System.Convert]::ToBase64String($GUIDbyte)
$immuCloudID = (Get-MsolUser -UserPrincipalName Testuser@careexchange.in).ImmutableId

To change the immutable ID for a specific user (here, setting the cloud value to the on-premises one):

Set-MsolUser -UserPrincipalName Testuser@careexchange.in -ImmutableId $immuOnPremID
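The GUID-to-ImmutableId conversion above can also be sketched outside PowerShell. This is a hypothetical Python illustration (the GUID value is made up); .NET's Guid.ToByteArray() uses the mixed-endian byte layout that Python exposes as bytes_le:

```python
import base64
import uuid

def guid_to_immutable_id(guid_str: str) -> str:
    """Convert an AD objectGUID string to the Base64 ImmutableId
    Azure AD expects (matches .NET Guid.ToByteArray() ordering)."""
    return base64.b64encode(uuid.UUID(guid_str).bytes_le).decode("ascii")

def immutable_id_to_guid(immutable_id: str) -> str:
    """Reverse conversion: decode an ImmutableId back to a GUID string."""
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable_id)))

# Hypothetical GUID, for illustration only
guid = "12345678-1234-1234-1234-123456789abc"
iid = guid_to_immutable_id(guid)
print(iid)
assert immutable_id_to_guid(iid) == guid  # round-trips cleanly
```

This is handy for checking whether a cloud ImmutableId really matches the on-premises objectGUID before resorting to Set-MsolUser.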

See also –

Office 365 Hybrid Configuration Wizard Step by Step

Adding Domain in Existing Hybrid Configuration

 

Build Your Own LAB – Cross-Region Replication for AWS S3

The cross-region replication feature enables automatic, asynchronous copying of objects between buckets in two AWS regions. You enable the feature at the bucket level and configure it on the source bucket. You can choose to replicate either the whole bucket or a subset of its objects to the destination region.


Important things to keep in mind:

· Cross-region replication will not work unless versioning is enabled on the buckets

· You cannot replicate within the same region; source and destination must be in two different regions

· Objects that already exist in the bucket are not replicated when you enable cross-region replication; only objects created after enabling it are replicated

· A replica is not replicated again to a third region

· Delete markers are also replicated

Let's do the lab. We will create two S3 buckets, apmumbucket (the source, in the Asia Pacific Mumbai region) and apsgbucket (the destination, in the Asia Pacific Singapore region), and replicate objects between them using cross-region replication.

Go to the Amazon S3 console and create a new bucket


Create a bucket called apmumbucket and select the region Asia Pacific (Mumbai)


Versioning is a must, so we will enable it and proceed further


Enabling read permission here is not mandatory; I am selecting it just so the object can be accessed from the internet. You can skip this option.


Review the settings and click on create bucket


Let us upload one file


Observe the file is uploaded in the bucket called apmumbucket


Let us now create another bucket in the Singapore region, where we want to replicate the content from the Mumbai region. Just review the summary as I create a bucket called apsgbucket.


Now we can see both buckets created across regions (Mumbai and Singapore); let's enable the feature.


Go to the bucket properties of apmumbucket, our source, and look for the Cross-Region Replication option.


Enable it and verify the settings below:

· Source – select Asia Pacific (Mumbai)

o Whole Bucket – here we have the option to replicate the whole bucket or, if needed, only a subset within it. We will select the whole bucket

· Destination – select Asia Pacific (Singapore)

o Bucket name – the bucket to replicate into. We will select the one we created, apsgbucket

· Destination Storage Class – the type of storage class, as covered in the previous article. Since this is a secondary (replica) copy, we will select the Infrequent Access class

· Role – AWS can create a default role for you, or you can create your own. We will leave the default


You can also select a subset instead of the whole bucket


You have the option to select a different storage class


Now the cross-region replication is enabled


Let us now see whether anything shows up in the bucket apsgbucket for the file we uploaded earlier. As expected, objects uploaded prior to enabling the cross-region feature aren't replicated.


Let us now try to upload a new file to the bucket apmumbucket and see whether it replicates to the bucket apsgbucket.


Verify the settings and upload


The file upload succeeded in the bucket apmumbucket; let us now quickly go to the bucket apsgbucket to see whether the file replicated.


The file now is automatically replicated


Also, there is only one version of the file. Now let us download the file from the bucket apmumbucket (Asia Pacific Mumbai region), modify the content, and upload another version. We will see whether that is replicated to the Singapore bucket too.


There is a file named test file.txt; download it, modify it, save it, and upload it again to the bucket apmumbucket.


We see there are now two versions for the file test file.txt


Notice the same file is replicated with the latest version too 😀


Remove Windows Server Backup Versions using PowerShell

I'm not sure whether Windows Server Backup's disk space management changed in Windows Server 2012 R2. On some servers it overwrites old backups properly without filling the disk; in some environments it doesn't.

Let's see how to remove old backup versions to free up some space and keep the backups going.

To see how many backup versions exist on the current machine:

(Get-WBBackupSet -MachineName DS001).Count


To keep the latest 15 versions and remove the rest:

Remove-WBBackupSet -MachineName DS001 -KeepVersions 15
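The keep-latest-N pruning that -KeepVersions performs can be sketched as a plain function; the version timestamps below are made up for illustration:

```python
def versions_to_remove(versions, keep):
    """Given backup version timestamps, return the ones to delete,
    keeping only the `keep` most recent (what -KeepVersions does)."""
    ordered = sorted(versions, reverse=True)  # newest first
    return ordered[keep:]                     # everything past the cutoff

backups = ["2017-01-10", "2017-01-12", "2017-01-11", "2017-01-09"]
print(versions_to_remove(backups, keep=2))
# Keeps 2017-01-12 and 2017-01-11; returns the two older versions
```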


Resource Pressure in Exchange Server

Let me explain the current situation:

– The Exchange Transport service keeps crashing with:

"Windows could not start the Microsoft Exchange Transport service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion."


– The queue database grows rapidly and then the Transport service stops.

Queue database default location:

"C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue"

And we can see the resource pressure (back pressure) event on the Exchange server. I strongly feel that if we understand this event properly and spend some time on it, we can reach a solution; the same goes for any other Transport service event if you don't receive this one.

Log Name:      Application
Source:        MSExchangeTransport
Event ID:      15004
Task Category: ResourceManager
Level:         Warning
Computer:      EX01.careexchange.in
Description:
The resource pressure increased from Medium to High.

The following resources are under pressure:
Version buckets = 366 [High] [Normal=80 Medium=120 High=200]

The following components are disabled due to back pressure:
Inbound mail submission from Hub Transport servers
Inbound mail submission from the Internet
Mail submission from Pickup directory
Mail submission from Replay directory
Mail submission from Mailbox server
Mail delivery to remote domains
Content aggregation
Mail resubmission from the Message Resubmission component.
Mail resubmission from the Shadow Redundancy Component

The following resources are in normal state:
Queue database and disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que") = 77% [Normal] [Normal=95% Medium=97% High=99%]
Queue database logging disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\") = 77% [Normal] [Normal=95% Medium=97% High=99%]
Private bytes = 5% [Normal] [Normal=71% Medium=73% High=75%]
Physical memory load = 64% [limit is 94% to start dehydrating messages.]
Submission Queue = 0 [Normal] [Normal=2000 Medium=4000 High=10000]
Temporary Storage disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Temp") = 77% [Normal] [Normal=95% Medium=97% High=99%]

 

Aspects of the event in my case: most resources are normal, so it's not a disk or disk space issue. If any of the values below were not normal, I would free some space or change the location of the queue database.

Queue database and disk space = 77% [Normal]

Queue database logging disk space = 77% [Normal]

Private bytes = 5% [Normal]

Physical memory load = 64% [Normal]

Submission Queue = 0 [Normal]

Temporary Storage disk space = 77% [Normal]

Version buckets = 366 [High]
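The level reported for each resource follows the thresholds shown in the event (for version buckets: Normal=80, Medium=120, High=200). A simplified Python sketch of that classification; the real transport service also uses separate, lower watermarks when transitioning back down, which this ignores:

```python
def pressure_level(value, medium, high):
    """Classify a transport resource reading against its back
    pressure thresholds, as reported in event 15004."""
    if value >= high:
        return "High"
    if value >= medium:
        return "Medium"
    return "Normal"

# Version buckets thresholds from the event above: Medium=120, High=200
print(pressure_level(366, medium=120, high=200))  # High
# Disk space at 77% against Medium=97%, High=99% stays Normal
print(pressure_level(77, medium=97, high=99))     # Normal
```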

Restarted the server; still no change.

Stopped the Transport services, moved the Queue folder data to a different location, and started the service. (There is a risk of losing emails if the queue is not empty or messages haven't been processed.)

We then tried recreating the transport database, but the same issue reappeared: the queue database grows rapidly and the Transport service stops.

Now we need to check what is being submitted to the transport database that makes it grow so large.

Let's check the message tracking logs for large messages (anything over 1 MB, i.e. 1048576 bytes):

Get-MessageTrackingLog -ResultSize Unlimited -Start "01/12/2017 00:00:00" | Where-Object {$_.TotalBytes -gt 1048576} | Select Sender,Subject,Recipients,TotalBytes
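If you export the tracking log to CSV first, the same over-1-MB filter can be sketched like this; the column names are assumed from the cmdlet output, and the rows below are made up. Note the numeric comparison: CSV fields arrive as strings, and comparing sizes as strings gives wrong answers.

```python
ONE_MB = 1048576  # bytes

def large_messages(rows, threshold=ONE_MB):
    """Keep tracking-log rows whose TotalBytes exceeds the threshold.
    Convert to int first: string comparison would sort lexically."""
    return [r for r in rows if int(r["TotalBytes"]) > threshold]

rows = [
    {"Sender": "a@careexchange.in", "Subject": "report", "TotalBytes": "52345"},
    {"Sender": "b@careexchange.in", "Subject": "movie clips", "TotalBytes": "2147483648"},
]
for r in large_messages(rows):
    print(r["Sender"], r["Subject"])  # only the 2 GB message
```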

Oh no. Someone sent a 2 GB file as an attachment. But wait, why did Exchange allow that?


Get-TransportConfig | fl

MaxSendSize: unlimited
MaxReceiveSize: unlimited

Both were unlimited, so set sensible limits:

Set-TransportConfig -MaxSendSize 30MB -MaxReceiveSize 30MB

Restarted Transport Service.


Now, looking at the user's mailbox: the message is not in the Outbox; it is in Sent Items.

Disabled the mailbox temporarily. Before disabling, always make sure the deleted mailbox retention is not set to 0 in the mailbox database properties.

Cleared the transport database and temp folder, and things were back to normal.

Waited a few hours, then re-enabled the mailbox.

Things were normal. I removed the 2 GB email from the mailbox anyway; don't ask me why, but I removed it.

In the end it was some movie clips sent by the user. Be careful with the max send size: left unlimited, it can really screw things up big time, and you can't blame the end user.

Build Your Own LAB – AWS S3 Bucket & Versioning

In this article we will do a lab on the AWS S3 bucket and its versioning. We will run through how to create an S3 bucket and understand its permissions, so we can delegate the right users access to the right buckets.

We will also run through the feature called "versioning", a powerful way to protect files against modification or deletion. Versioning stores all versions of an object, which makes it a great backup tool. We will also go through the restoration process, which is quite easy.

Go to the services dashboard and select S3 storage service


You will notice that creating a bucket also asks for the region; I have selected my nearest region, Mumbai.


You cannot use a name that has already been used; as discussed in our previous article, the bucket namespace is global and names must be unique. It seems mumbucket has already been created, so let's create a bucket named mumbucket1.


If you notice mumbucket1 is created


Click on mumbucket1; on the right pane you will notice its attributes and associated values.


As you see below, storage management features are now available in the new console; let's click Opt-In to do our further lab steps in the new console.


Below is the Amazon S3 new console look


After clicking on mumbucket1, the features associated with the bucket are displayed as shown below.


Clicking the Permissions option shows who has access to the bucket and what kind of permission is assigned. Below is the default root permission applied when the bucket was created. You can add other AWS users, or grant permission to everyone you want to allow access from the internet.


Let us upload a file and then assign permissions


Here is my file from the desktop called test.txt to upload


You can assign permissions here for who can access what. The root account, Charles***, has permission by default, and while uploading we can assign permissions to other users too. The file will not be accessible from the internet, as it is private by default. Notice there are two types of permission:

· Object access – read lets the user download the object; write lets the user overwrite or delete it.

· Object permissions access – read lets the user see who has permission on that particular object; write lets the user modify those permissions too.

Let’s leave the default and click on next to continue


We went through the storage classes and their features in the previous article; for this particular lab we will leave the default option and click Next to continue.


Review the settings and click on upload


The file is uploaded to the bucket mumbucket1.


Click on the file test.txt and you will notice a link to access it, but when you try to open it, it throws an access denied error because we have not assigned any permission for internet access.


You can select the Make public option so that the file is accessible from the internet.


Now you see the permission is updated: AllUsers has been added with read permission on object access.


Now when you open the test.txt file link again you will notice it is opening without any access denied error.


So far we have created a bucket called mumbucket1 and uploaded a test.txt file. Let's enable versioning on the bucket; the files inside will then keep multiple versions as changes are made. To enable versioning, all we have to do is select the versioning radio button under the bucket's properties.


One thing you must be aware of: once you enable versioning you cannot disable it, only suspend it. Suspending means S3 will stop keeping multiple versions going forward, but the versions that already exist are kept as they are. If you want to delete them, you have to manually select each version you want to delete.


Versioning is now enabled


Notice the Latest version tab on top; you can drop it down to see the multiple versions of a file.


Let us now download the test.txt file, modify the content and upload it


I have updated the file with an extra line: "updating the downloaded file". Note that the file name must not be changed; upload it under the same name so S3 stores it as a new version.


Let's set the file permission so it can be accessed from the internet, and upload it.


Now when you look at the file properties and drop down the Latest version tab, you will notice there are two versions. Remember that every version counts toward your S3 storage (two versions roughly double the size), which you might want to consider before enabling versioning.


Now when you try to access the test.txt file link, notice the updated content inside the text file.


You can also select individual versions and delete unwanted ones. Let's say we delete the latest version.


test.txt will then show the original content, as below.


Now let's delete the test.txt file and try to restore it. Select test.txt and, under the More menu, select Delete; the file will be deleted.


The file is deleted now. I went through the AWS technical documentation but was unable to find a guide on restoring an object from the new portal, so for this lab we will go back to the old S3 console.


Now when you are on the old console select the bucket mumbucket1


You will notice versions: Hide / Show


Files that are deleted are actually just marked with a delete marker, not hard deleted. All we have to do is click Show, select the entry with the (delete marker) remark, and from the Actions drop-down select Delete. Deleting the delete marker restores the object.
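The delete-marker behavior can be modeled in a few lines. This is only a toy in-memory sketch mirroring what the console shows, not the real S3 API:

```python
class VersionedBucket:
    """Toy model of S3 versioning: uploads append versions, a delete
    adds a delete marker, and removing the marker restores the file."""

    def __init__(self):
        self.versions = {}  # key -> list of versions (newest last)

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # A delete adds a marker; the older versions remain stored
        self.versions.setdefault(key, []).append(None)

    def get(self, key):
        v = self.versions.get(key, [])
        if not v or v[-1] is None:  # missing, or hidden by a delete marker
            return None
        return v[-1]

    def remove_delete_marker(self, key):
        v = self.versions.get(key, [])
        if v and v[-1] is None:
            v.pop()  # the object becomes visible again

b = VersionedBucket()
b.put("test.txt", "original")
b.put("test.txt", "updated")
b.delete("test.txt")
assert b.get("test.txt") is None   # looks deleted
b.remove_delete_marker("test.txt")
print(b.get("test.txt"))           # restored latest version
```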


You will notice now the test.txt file has been restored successfully.


So far what we have learned:

· Creating a S3 bucket and checking its permission – very critical

· Securing S3 bucket using permission

· Enabling versioning on a bucket and checking multiple version of files

· Deleting and restoring file from the S3 bucket.
