Channel: Infrastructure – rakhesh.com

Generating certificates with SAN in NetScaler (to make it work with Chrome and other browsers)


I want to create a certificate for my NetScaler and get it working in Chrome. Creating a certificate is easy – there are Citrix docs etc. for it – but Chrome keeps complaining about a missing subjectAlternativeName. This is because Chrome 58 and upwards ignore the Common Name (CN) field in a certificate and only check the Subject Alternative Name (SAN) field. Other browsers too might ignore the CN field if the SAN field is present (they are supposed to, at least); so as a best practice it’s a good idea to fill the SAN field in the NetScaler certificate and put all the names (including the CN) in it.

The problem is that the NetScaler web UI doesn’t have an option for specifying the SAN field. A Windows CA (which is what I use internally) supports SANs when making requests, but since the CSR is usually created on the NetScaler – which has no way of mentioning SANs – I need an alternative approach.

Here’s one approach from a Citrix blog post. Typically the CLI-loving geek in me would have taken that route and stopped at that, but today I feel like exploring GUI options. :)
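For reference, the CLI route boils down to generating the CSR off-box with OpenSSL. A minimal sketch – the host names, IP, and file names below are placeholders, not the exact steps from the Citrix post:

# san.cnf - a minimal OpenSSL request config with SANs
[req]
distinguished_name = dn
req_extensions = req_ext
prompt = no

[dn]
CN = ns.example.com

[req_ext]
subjectAltName = DNS:ns.example.com, DNS:ns-01.example.com, IP:10.0.0.10

Then generate the private key and CSR in one go:

openssl req -new -newkey rsa:2048 -nodes -keyout ns.key -out ns.csr -config san.cnf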

So I came across the DigiCert Certificate Utility and a guide on how to generate a CSR using that. I don’t need to use the guide entirely as my CA is internal, but the tool (download link) is useful. So I downloaded it and created a certificate request. 

A bit of background on the above. I have two NetScalers: ns105-01.rockylabs.zero (IP 10.10.1.150) and ns105-02.rockylabs.zero (IP 10.10.1.160) in an HA pair. For management purposes I have a SNIP 10.10.1.170 (DNS name ns105.rockylabs.zero) which I can connect to without worrying about which one is the current primary. So I want to create a certificate that will be valid for all three DNS names and IP addresses. Hence in the Subject Alternative Names field I fill in all three names and IP addresses – note: all three names including the one I put in the Common Name, since Chrome ignores that field (and other browsers are supposed to ignore the CN if a SAN is present).

I click Generate and the tool generates a new CSR. I save this someplace. 

Now I need to use this CSR to generate a certificate. Typically I would have gone with the WebServer template in my internal CA, but the thing is, eventually I’ll have to import this CSR, the generated certificate, and the certificate’s private key into the NetScaler – and the default WebServer template does not allow key export.

So I make a new template on my CA. This is just a copy of the default “Web Server” template, but I make a change to allow exporting of the private key (see checkbox below).

Then I create a certificate on my CA using this CSR. 

certreq -submit -attrib "CertificateTemplate:WebServer_withKey" <path-to-CSR>

Note that “WebServer_withKey” is the template name, not its display name – certreq needs the former.

This will create the certificate and save it at a location I specify. 

At this point I have the CSR and the certificate. I can’t import these into the NetScaler yet, as that also requires the private key. The DigiCert tool generated the private key automatically and keeps it with itself, so I need to import the certificate into the tool and export it from there along with the key. This exports the certificate, key included, in PFX format.

This Citrix article is a good reference on the various certificate formats. It also gives instructions on how to import a PFX certificate into NetScaler.

Before proceeding however, a quick summary of the certificate formats from the same article for my own reference:

  • PFX is a format for storing a server certificate or any intermediate certificate along with private key in one encrypted file. 
    • PFX == PKCS#12 (i.e. both terms can be used interchangeably). 
  • PEM is another format, and a very common one actually. It can contain certificates and keys together, or either one separately. 
    • These are Base64 encoded ASCII files and have extensions such as .pem, .crt, .cer, or .key. 
  • DER is a binary form of the PEM format (so while a PEM file can be opened as text in Notepad, for instance, a DER file cannot). 
    • These are binary files. Have extensions such as .cer and .der. (Note: .cer can be a PEM format too).

So I go ahead and import the PFX file.

And then I install a new certificate created from this imported PFX file. 

Note: After taking the screenshot I changed the first field (certificate-key pair name) to “ns105_rockylabs_zero_withKey” just to make it clear to my future self that this certificate includes the key with itself and that I won’t find a separate key file as is usually the case. The second field is the name of the PEM file that was previously created and is already on the appliance.

The certificate is successfully installed:

The next step is to replace the default NetScaler certificate with this one. This can be done via the GUI or CLI as in this Citrix article. The GUI is a bit of a chore here, so I went the CLI way. 

bind ssl service nshttps-10.10.1.170-443 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nsrpcs-10.10.1.170-3008 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nskrpcs-127.0.0.1-3009 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nshttps-::1l-443 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nsrpcs-::1l-3008 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nshttps-127.0.0.1-443 -certkeyName ns105_rockylabs_zero_withKey
bind ssl service nsrpcs-127.0.0.1-3008 -certkeyName ns105_rockylabs_zero_withKey

And that’s it! Now I can access my NetScalers over SSL using Chrome, with no issues. 


[Aside] Misc ADFS links


Update: To test ADFS as an end-user, go to https://<adfsfqdn>/adfs/ls/IdpInitiatedSignon.aspx. You should get a page where you can sign in and see what trusts are present.

Certificate stuff (as a note to myself)


Helping out a bit with the CA at work, so just putting these down here so I don’t forget later.

For managing user certificates: certmgr.msc.

For managing computer certificates: certlm.msc.

Using the CA Web enrollment pages with SAN attributes requires EDITF_ATTRIBUTESUBJECTALTNAME2 to be enabled on your CA.

Enable it thus:

certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
net stop certsvc
net start certsvc

When making a request, in the attributes field enter the following for the SANs: san:dns=corpdc1.fabrikam.com&dns=ldap.fabrikam.com.
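The same SAN attribute can also be passed from the command line when submitting a CSR – a sketch, with a placeholder request file name:

certreq -submit -attrib "SAN:dns=corpdc1.fabrikam.com&dns=ldap.fabrikam.com" request.req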


Notes on ADFS


I have been trying to read up on ADFS nowadays. It’s my new area of interest! :) I wrote a document at work sort of explaining it to others, so here’s bits and pieces from that.

What does Active Directory Federation Services (ADFS) do?

Typically when you visit a website you’d need to log in to that website with a username/ password stored on their servers, and then the website will give you access to whatever you are authorized to. The website does two things basically – one, it verifies your identity; and two, it grants you access to resources.

It makes sense for the website to control access, as these are resources with the website. But there’s no need for the website to control identity too. There’s really no need for everyone who needs access to a website to have user accounts and passwords stored on that website. The two steps – identity and access control – can be decoupled. That’s what ADFS lets us do.

With ADFS in place, a website trusts someone else to verify the identity of users. The website itself is only concerned with access control. Thus, for example, a website could have trusts with (say) Microsoft, Google, Contoso, etc. and if a user is able to successfully authenticate with any of these services and lets the website know so, they are granted access. The website itself doesn’t receive the username or password. All it receives are “claims” about a user.

What are Claims?

A claim is a statement about “something”. Example: my username is ___, my email address is ___, my XYZ attribute is ___, my phone number is ____, etc.

When a website trusts our ADFS for federation, users authenticate against the ADFS server (which in turn uses AD or some other store to authenticate users), which then passes a set of claims to the website. Thus the website has no info on the (internal) AD username, password, etc. All the website sees are the claims, using which it can decide what to do with the user.

Claims are per trust. Multiple applications can use the same trust, or you could have a trust per application (the latter is more likely).

All the claims pertaining to a user are packaged together into a secure token.

What is a Secure Token?

A secure token is a signed package containing claims. It is what an ADFS server sends to a website – basically a list of claims, signed with the token signing certificate of the ADFS server. We would have sent the public key part of this certificate to the website while setting up the trust with them; thus the website can verify our signature and know the tokens came from us.

Relying Party (RP) / Service Provider (SP)

Refers to the website/ service that is relying on us. They trust us to verify the identity of our users and have allowed our users access to their services.

I keep saying “website” above, but really I should have been more generic and said Relying Party. A Relying Party is not limited to a website, though that’s how we commonly encounter it.

Note: Relying Party is the Microsoft terminology.

ADFS cannot be used for access to the following:

  • File shares or print servers
  • Active Directory resources
  • Exchange (O365 excepted)
  • RDP connections to servers
  • “Older” web applications that are not claims aware

A Relying Party can be another ADFS server too. Thus you could have a setup where a Relying Party trusts an ADFS service (which is the Claims Provider in this relationship), and the ADFS service in turn trusts a bunch of other ADFS servers depending on (say) the user’s location (so the trusting ADFS service is a Relying Party in this relationship).

Claims Provider (CP) / Identity Provider (IdP)

The service that actually validates users and then issues tokens. ADFS, basically.

Note: Claims Provider is the Microsoft terminology.

Secure Token Service (STS)

The service within ADFS that accepts requests and creates and issues security tokens containing claims.

Relying Party Trust

Refers to the trust between a Relying Party and Identity Provider. Tokens from the Identity Provider will be signed with the Identity Provider’s token signing key – so the Relying Party knows it is authentic. Similarly requests from the Relying Party will be signed with their certificate (which we can import on our end when setting up the trust).

Web Application Proxy (WAP)

Access to an ADFS server over the Internet is via a Web Application Proxy. This is a role in Server 2012 and above – think of it as a reverse proxy for ADFS. The ADFS server is within the network; the WAP server is on the DMZ and exposed to the Internet (at least port 443). The WAP server doesn’t need to be domain joined. All it has is a reference to the ADFS server – either via DNS, or even just a hosts file entry. The WAP server too contains the public certificates of the ADFS server.

Miscellaneous

  • ADFS Federation Metadata – this is a cool link that is published by the ADFS server (unless we have disabled it). It is https://<your-adfs-fqdn>/FederationMetadata/2007-06/FederationMetadata.xml and contains all the info required by a Relying Party to add the ADFS server as a Claims Provider.
    • This also includes Base64 encoded versions of the token signing certificate and token decrypting certificates.
  • SAML Entity ID – not sure of the significance of this yet, but this too can be found in the Federation Metadata file. It is usually of the form http://<your-adfs-fqdn>/adfs/services/trust and is required by the Relying Party to set up a trust to the ADFS server.
  • SAML endpoint URL – this is the URL where users are sent for authentication. Usually of the form https://<your-adfs-fqdn>/adfs/ls. This information too can be found in the Federation Metadata file.
  • Link to my post on ADFS Certificates.

[Aside] Various SharePoint links


Been dabbling in a bit of SharePoint at work, here’s some links I came across and want to put here as a reference for Future Rakhesh:

[Aside] How to convert a manually added AD site connection to an automatically generated one


Cool tip via a Microsoft blog post. If you have a connection object in your AD Sites and Services that was manually created and you now want to switch over to letting KCC generate the connection objects instead of using the manual one, the easiest thing to do is convert the manually created one to an automatic one using ADSI Edit.

1.) Open ADSI Edit and go to the Configuration partition.

2.) Drill down to Sites > (the site where the manual connection object is) > Servers > (the server where the manual connection object is created) > NTDS Settings

3.) Right click on the manual connection object and go to properties

4.) Go to the Options attribute and change it from 0 to 1 (if it’s an RODC, then change it from 64 to 65)

5.) Either wait 15 minutes (that’s how often the KCC runs) or run repadmin /kcc to manually kick it off
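If you prefer PowerShell over ADSI Edit, something like this should do the same – a sketch, with a placeholder connection object DN:

Import-Module ActiveDirectory

# Flip the options attribute (0 -> 1) so the connection is treated as auto-generated
$dn = "CN=MyManualConnection,CN=NTDS Settings,CN=DC1,CN=Servers,CN=SiteA,CN=Sites,CN=Configuration,DC=example,DC=com"
Set-ADObject -Identity $dn -Replace @{options=1}

repadmin /kcc   # kick off the KCC rather than waiting 15 minutes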

While on that topic, here’s a blog post to enable change notifications on manually created connections. More values for the options attribute in this spec document.

Also, a link for myself to the Bridge All Site Links (BASL) section of the TechNet AD Replication topology docs. Our environment now has a few sites that can’t route to all the other sites so I had to disable BASL today and was reading up on it.

Asus RT-AC68U router, firmware, etc.


Bought an Asus RT-AC68U router today. I didn’t like my existing D-Link much, and a colleague bought the Asus and was all praise for it, so I thought why not try that.

Was a bit put off that many of the features (especially the parental control ones) seem to be tied up with a Trend Micro service that’s built into the router. When you enable these you get an EULA agreement from Trend Micro, and while I usually just click through EULA agreements this one caught my eye coz it said somewhere that Asus takes no responsibility for any actions of Trend Micro – so they pretty much wash their hands of whatever Trend Micro might do once you sign up for it. That didn’t sound very nice. I mean, yes, I knew the router had some Trend Micro elements in it, and I have used Trend Micro in the past and have no beef with them, but I bought an Asus router and I expect them to take responsibility for whatever they put in the box.

Anyways, Googling about it I found some posts like this, this, and this that echoed similar sentiments and put me off. It was upsetting, as a lot of the value I was hoping to get out of the router was centered around using Trend Micro, and since I didn’t want to accept the EULA I would never be able to use it.

I briefly thought of flashing some other firmware in the hopes that that would give me more features. Advanced Tomato looks nice, but then I came across Asus WRT Merlin, which seems to be based on the official firmware but with some additional features and bug fixes, and a focus on performance and safety rather than new features. (Also, the official Asus firmware and the Merlin one have hardware NAT acceleration and proprietary NTFS drivers that offer better performance, while other third-party firmwares don’t. The hardware NAT only matters if your WAN connection is > 100Mbps, which wasn’t so in my case.) Asus WRT Merlin looks good. The UI is the same as the official one, and it appears that the official firmware has slowly embraced many of the newer features of Merlin. Also, this discussion from the creator of the Merlin firmware on the topic of Trend Micro was good too. It wasn’t as doom and gloom as the others (but I still haven’t enabled the Trend Micro stuff nor do I plan on doing so).

The Merlin firmware is amazing. Flashing it is easy, and it gives some nifty new features. For example you can have custom config files that extend the inbuilt DHCP/ DNS server dnsmasq, add other 3rd-party software, and so on. This official Wiki page is a good read. I came across this malware blocking script and installed it. I also made some changes to DHCP so that certain machines get different DNS servers (e.g. point my daughter’s machine to use Yandex.DNS). Here’s a bit from my config file in case it helps –

# Associate MAC address with IP address and lease period (optional).
# Note you can assign multiple MACs to the same IP (as I do for GAIA below).
# You can also assign a set/tag like I am doing here for GAIA.
# dhcp-host=[<hwaddr>][,id:<client_id>|*][,set:<tag>][,<ipaddr>][,<hostname>][,<lease_time>][,ignore]
dhcp-host=E8:03:2A:AE:29:40,C2:82:08:05:7B:75,192.168.1.156,set:MyLaptop,infinite

# Associate MAC address with sets/ tags so I can treat them differently later. Note I am not setting a specific IP here.
# Some docs & forum posts seem to omit the "set:" part. Maybe it's optional or a new feature.
# dhcp-mac=set:<tag>,<MAC address> (MAC can be a wildcard)
dhcp-mac=set:Kid,42:28:CA:D2:9C:FC
dhcp-mac=set:AppleTV,DC:56:E2:42:2D:C0

# Associate different options (mainly DNS) for the tagged hosts.
# dhcp-option=[tag:<tag>,[tag:<tag>,]][encap:<opt>,][vi-encap:<enterprise>,][vendor:[<vendor-class>],][<opt>|option:<opt-name>|option6:<opt>|option6:<opt-name>],[<value>[,<value>]]
dhcp-option=Kid,6,208.67.222.123,208.67.220.123
dhcp-option=AppleTV,6,185.37.37.37,185.39.39.39
dhcp-option=MyLaptop,6,8.8.8.8,8.8.4.4

This dnsmasq manpage was helpful, so was this page of examples. Also this StackOverflow post.

I liked this idea of having separate DHCP options for specific SSIDs, and also this one of having a separate SSID that’s connected to VPN (nice!). I wanted to try these but was feeling lazy so didn’t get around to doing it. I read a lot about it though, and liked this post on having separate VLANs within the router. That post also explains the port numbering etc. of the router – it’s a good read.

I also wanted to see if it was possible to have a separate VLAN for an SSID – let’s say have all my visitors connect to a different SSID with its own VLAN and IP range etc. I know I can do the IP range and stuff, but it looks like if I need a separate VLAN I’ll have to give up one of the four ports on the back of the router. Basically the way things seem to be set up is that the 5 ports on the back of the router are part of the same switch, just that the WAN port is in its own VLAN 2 while the LAN ports are in their own VLAN 1. The WLAN (Wireless) interfaces are bridged to this VLAN 1. So if you want a separate WLAN SSID with its own VLAN, you must create a new VLAN on one of the four ports and bridge the new SSID to that (there’s a sketch of this further below).

me@RT-AC68U:/tmp/home/root# robocfg show
Switch: enabled
Port 0: 1000FD enabled stp: none vlan: 2 jumbo: off mac: 00:00:5e:00:01:02
Port 1:  100FD enabled stp: none vlan: 1 jumbo: off mac: dc:56:e7:41:1d:c0
Port 2: 1000FD enabled stp: none vlan: 1 jumbo: off mac: e8:03:9a:ae:39:40
Port 3:   DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
Port 4: 1000FD enabled stp: none vlan: 1 jumbo: off mac: f8:46:1c:d4:87:e5
Port 5: 1000FD enabled stp: none vlan: 2 jumbo: off mac: 60:45:cb:59:58:c8
Port 7:   DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
Port 8:   DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
VLANs: BCM5301x enabled mac_check mac_hash
   1: vlan1: 1 2 3 4 5t
   2: vlan2: 0 5

In the above, port 0 is the WAN, ports 1-4 are the LAN ports, and port 5 is the router itself (the SoC on the router). Since port 5 is part of both VLANs the router can route between them. The port numbers vary per model; here’s a post showing what the above output might look like in such a case. As a reference to myself, this person was trying to do something similar (I didn’t read all the posts so there could be stuff I missed in there).
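As for the sketch I promised: carving out a separate VLAN and bridging a guest SSID to it would look roughly like the below. This is only a sketch – the port and VLAN numbers and the interface names (eth0 as the internal switch trunk, wl0.1 as the guest SSID) are illustrative and vary per model and firmware, so don’t take it verbatim.

# Take LAN port 4 out of VLAN 1 and put it in a new VLAN 3, tagged to the SoC (port 5)
robocfg vlan 1 ports "1 2 3 5t"
robocfg vlan 3 ports "4 5t"
vconfig add eth0 3
ifconfig vlan3 192.168.3.1 netmask 255.255.255.0 up

# Bridge the guest SSID interface to the new VLAN
brctl addbr br1
brctl addif br1 vlan3
brctl addif br1 wl0.1
ifconfig br1 up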

Lastly these two wiki pages from DD-WRT Wiki are worth referring to at some point – on the various ports, and multiple WLANs.

At some point, when I am feeling less lazy, I must fiddle around with this router a bit more. It’s fun, reminds me of my younger days with Linux. :)

[Aside] Web Servers


I came across these recently and wanted to put them here as a bookmark to myself.

  • h5ai – A modern file browsing UI for web server. Looks amazing!
  • HFS – HTTP File Server. It’s a web server and also a way to send and receive files over HTTP. I haven’t used it but my colleagues recently did.
  • Fenix – A web server you can run on your desktop or laptop. Looks nice too!
  • TinyWeb – A very tiny web server you can run on your desktop or laptop.
  • Caddy – an HTTP/2 web server with automatic HTTPS. Got to check it out sometime.

HPE Synergy and eFuse Reset


In the HPE BladeSystem c7000 Enclosures one can do something called an eFuse reset to power cycle any of the server blades. I have blogged about it previously here.

Now we are on the HPE Synergy 12000 Frames at work and I wanted to do something similar. One of the compute modules (aka server :p) was complaining that the server profile couldn’t be applied due to some errors. The compute module was off and refusing to power on, so it looked like there was nothing we could do short of removing it from the frame and putting it back. I felt an eFuse reset would do the trick here – it does the same after all.

I couldn’t find any way of doing this via SSH into the frame’s OneView (which is the equivalent of the Onboard Administrator in a c7000 Enclosure), but then I found this PowerShell library from HPE. Now that is pretty cool! Here’s a wiki page too with all the cmdlets – a good page to bookmark and keep handy. Using this I was able to power cycle the compute module.

1) Install the library following instructions in the first link.

2) Login.

3) Get a list of the modules in the enclosure (not really required but I did anyways to confirm the PowerShell view matches my expectations).

4) Now assign the enclosure object containing the module I want to reset to a variable. We need this for the next step.

In my case the Synergy 12000 Frame (capital “F”) is made up of two frame enclosures. (The frame enclosure is where you have the compute modules and interconnects and frame link modules etc).  The module I want to reset is in bay 1 of frame 2. So below I assign the frame 2 object to a variable.

5) Now do the actual eFuse reset.

The -Component parameter can take as argument Device (for compute modules), FLM (for Frame Link Modules), ICM (for InterConnect Modules), and Appliance (for the Synergy Composer or Image Streamer). The -DeviceID parameter is the bay number for the type of component we are trying to reset (so -Component Device -DeviceID 1 is not the same as -Component ICM -DeviceID 1).
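Putting the steps together, it looks something like this. A sketch only – the host and enclosure names are placeholders, and the cmdlet names are per the HPE library/wiki mentioned above, so double-check them there:

Connect-HPOVMgmt -Hostname composer.example.com -Credential (Get-Credential)

# Step 3: list the enclosures (frames) and what's in them
Get-HPOVEnclosure

# Step 4: the frame enclosure containing the module to reset
$frame2 = Get-HPOVEnclosure -Name "Frame2"

# Step 5: eFuse reset of the compute module in bay 1 of that frame
Reset-HPOVEnclosureDevice -Enclosure $frame2 -Component Device -DeviceID 1 -Efuse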

An eFuse reset is optional. You could do a simple reset too by skipping the -Efuse switch. The Appliance and ICM components only do eFuse reset though. I am not sure what a regular (non eFuse) reset does.

Couple of DNS stuff


So CloudFlare announced the 1.1.1.1 DNS resolver service the other day. Funny, coz I had been looking into various DNS options for my home network recently. What I had noticed at home was that when I use the Google DNS or OpenDNS resolvers I get a different (and much closer!) result for google.com while with other DNS servers (e.g. Quad9, Yandex) I get a server that’s farther away.

I was aware that using 3rd-party DNS resolvers like this could give me less than ideal results, because the name server of the service I am querying would see my queries coming from this 3rd-party resolver and hence give me a result from the region of that resolver (e.g. if Google.com has servers in the UAE and US, and I am based in the UAE but use a US-based resolver, Google.com’s name servers will see that the request for www.google.com is coming from a server in the US and hence give me a result from the US, thinking that’s where I am located). But that didn’t explain why Google DNS and OpenDNS were actually giving me good results.

Reading about that I came across this performance page from the Google DNS team and learnt about the edns-client-subnet (ECS) option (also see this FAQ entry). This is an option that name servers can support wherein the client can send its IP/ subnet along with the query, and the name server will look at that and modify its response accordingly. And if the DNS resolver supports this, then it can send along this info to the name servers being queried and thus get better results. Turns out only Google DNS and OpenDNS support this, and Google actually queries the name servers it knows with ECS queries and caches the results to keep track of which name servers support ECS. This way it can send those servers the ECS option. That’s pretty cool, and a good reason to stick with Google DNS! (I don’t think CloudFlare DNS currently does this, because I get non-ideal results with it too.)
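You can see ECS in action with dig, which lets you attach a client subnet to a query. For illustration (the subnet below is a documentation range – you’d use your own):

dig @8.8.8.8 www.google.com +subnet=198.51.100.0/24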

From this “how it works” page:

Today, if you’re using OpenDNS or Google Public DNS and visiting a website or using a service provided by one of the participating networks or CDNs in the Global Internet Speedup then a truncated version of your IP address will be added into the DNS request. The Internet service or CDN will use this truncated IP address to make a more informed decision in how it responds so that you can be connected to the most optimal server. With this more intelligent routing, customers will have a better Internet experience with lower latency and faster speeds. Best of all, this integration is being done using an open standard that is available for any company to integrate into their own platform.

While on DNS, I came across DNS Perf via the CloudFlare announcement. Didn’t know of such a service. Also useful, in case you didn’t know already, is this GRC tool.

Lastly, I came across Pi-Hole recently and that’s what I use at home nowadays. It’s an advertisement black hole. Got a good UI and all. It uses DNS (all clients point to the local Pi-Hole install for DNS) and is able to block advertisements and malware this way.

Etisalat and 3rd party routers


I shifted houses recently and rather than shift my Internet connection (as that has a 4-day downtime) I decided to apply for a new connection at the new premises (there was an offer going on wherein the installation charge is zero) and then disconnect the existing connection once I had shifted. A downside of this – which I later realized – is that Etisalat seems to have stopped giving customers the Internet password.

Turns out Etisalat (like many other ISPs) now autoconfigures its routers. You simply plug one into the network and it contacts Etisalat’s servers and configures itself. This is done using a protocol called TR-069, which I don’t know much about, but it seems to have some security risks. I have an Asus RT-AC68U router anyways which I have set up the way I want, so I wanted to move over from the Etisalat D-Link router to this one. When I spoke to the chap who installed my new Internet connection he said Etisalat apparently does not allow users to install their own routers. Found many Reddit posts too where people have complained of having to contact Etisalat and not being given this password, and also about having to set a VLAN etc. (e.g. this post). Seemed to be a lot of trouble.

Anyhow, I decided to try my luck. First I contacted them via email (care -at- etisalat.ae) asking to reset my password. A helpful agent called me up after a while and reset the password for it. It didn’t even affect my Internet connection, coz the auto-configuring ensured that the Etisalat router picked up the new info. So far so good. I tried using these details with the Asus router to see if it would work straightaway, but it didn’t. So I sent them another email asking for the VLAN details. Next day another chap called me up and gave me the VLAN details. He also mentioned that I’d have to leave PnP on in my Asus router, or else he could raise a ticket to disable it. I said I’d like to have it disabled. About 4 hours later someone else called me up and said they were going to disable it now and would I like any assistance etc. I said nope, I’ll take care of it on my own.

Once they disabled PnP the Etisalat router stopped working. So I swapped it with the Asus one, and set the VLAN to what the agent gave me (it’s under LAN > IPTV Settings, confusingly). I also changed the MAC of the Asus router to that of the Etisalat one – though I am not sure if that was really needed (I just did it beforehand, before unplugging the Etisalat router). This didn’t get things working though. Which stumped me for a while, until on a whim I decided to remove the VLAN stuff and just try with the username/ password like I had done yesterday. And yay, that worked! So it wasn’t too much of a hassle after all. The phone and TV (eLife) still seem to be working, so it looks like I didn’t break anything either.

So, to summarize. If you want to use your own router with Etisalat (new connections), send them an email asking for the password to be reset and also for changes such as disabling Plug & Play so you can use your own router. Ask for the VLAN too just in case. Once you get these details connect the new router and put in the username/ password. If that doesn’t work put in the VLAN info too. That’s all! I was pleased with the quick turnaround and support, and it didn’t turn out to be a hassle at all like I was expecting. Nice one! :)

Asus RT-AC68U router, firmware, etc. (contd.)


Continuing a previous post of mine as a note to myself.

Tried to flash my Asus RT-AC68U with the Advanced Tomato firmware and that was a failed attempt. The router just kept rebooting. Turns out Advanced Tomato doesn’t work on the newer models. Bummer! Not that I particularly wanted Advanced Tomato. It looked good and I wanted to try it out, that’s all. Asus Merlin suits me just fine.

Quick shout out to “Yet another malware block script” which I’ve now got running on the Asus RT-AC68U. I also came across and have installed AB-Solution, which seems to be the equivalent of Pi-Hole but for routers. I got rid of Pi-Hole yesterday as I moved the Asus back to being my primary router (replacing the ISP-provided one) and I didn’t want to depend on a separate machine for DNS etc. I wanted the Asus to do everything, including ad-blocking via DNS, so I Googled for Asus alternatives and came across AB-Solution. Haven’t explored it much except for installing it. Came across it via this post.

That’s all for now!

As an aside, I feel so outdated using Linux nowadays. :( The last time I used Linux was 4-5 years ago – Debian and Fedora etc. Now most of the commands I am used to from those times don’t work any more. Even simple stuff like ifconfig or route. It’s all systemd based now. I had to reconfigure the IP address of this Debian VM where I installed Pi-Hole and I thought I could do it, but for some reason I didn’t manage. (And no, I didn’t read the docs! :p)
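For my own reference, the iproute2 replacements for the old commands:

ip addr     # replaces ifconfig
ip route    # replaces route / route -n
ip link     # NIC state (up/ down etc.)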

This is not to blame Linux or systemd or progress or anything like that. Stuff changes. If I was used to Windows 2003 and came across Windows 2008 I’d be unused to its differences too – especially in the command line. Similarly from Server 2008 to 2012. It’s more a reflection of me being out of touch with Linux and now too lazy to try and get back on track. :)

IPv6 at home!


Whee! I enabled IPv6 at home today. :)

It’s pretty straight-forward so not really an accomplishment on my part actually. I didn’t really have to do anything except flip a switch, but I am glad I thought of doing it and actually did it, and pretty happy to see that it works. Nice!

Turns out Etisalat started rolling out IPv6 to home users in Dubai back in November 2016. I obviously didn’t know of it. Nice work Etisalat!

Also, my Asus router supports IPv6. Windows and iOS etc. support IPv6 too, so all the pieces are really in place.

All I had to do on the Asus router was go to the IPv6 section, set Connection Type as “Native”, Interface as “PPP”, enable “DHCP-PD” and enable “Release prefix on exit”. DHCP-PD stands for “DHCP Prefix Delegation”. In IPv4 the ISP gives your home router a single public IP and everything behind the home router is NAT’d into that single public IP by the router. In IPv6 you are not limited to a single public IP. IPv6 has tons of addresses after all, so every device can have a public IP. Thus the ISP gives you not a single IPv6 address but a /64 publicly accessible prefix, and all your home devices can take addresses from that pool. Thus “DHCP-PD” means your router asks the ISP to give it a prefix, and “Release prefix on exit” means the router gives that prefix back to the ISP when disconnecting or whatever.

I also decided to use the Google DNS IPv6 servers.

Here’s a list of IPv6 only websites if you want to visit and feel good. :p

Check out this website to test IPv6. It also has a dual stack version that checks if your browser prefers IPv4 over IPv6 even though it may have IPv6 connectivity. Initially I was using this test site. The test succeeded there but I got the following error: “Your browser has real working IPv6 address – but is avoiding using it. We’re concerned about this.”. Turns out Chrome and Firefox start an internal timer when a site has both an IPv6 and an IPv4 address, and if the IPv4 address responds faster they prefer the IPv4 version (the “Happy Eyeballs” approach). Crazy huh! In Firefox I found these two options in about:config that seemed to fix this – network.http.fast-fallback-to-IPv4 (set this to false) and network.notify.IPv6 (set to true – I am not sure this setting matters for my scenario but I changed it anyways).

Here’s Comcast’s version of SpeedTest over IPv6.

Back to my router settings. I decided to go with “Stateful” auto configuration for the IPv6 LAN and set an appropriate range. With IPv6 you can have the router dole out IPv6 addresses to clients (in the prefix it has), or you can have clients auto configure their own IPv6 address by asking the router for the prefix information and creating their own address based on that. The former is “Stateful”, the latter is “Stateless”. I decided to go with “Stateful” (though I did play around with “Stateless” too). Also, leave the “Router Advertisements” section Enabled.

That’s pretty much it.

In my case I ended up wasting about an hour after this, as I noticed that my Windows 10 laptop would work on IPv6 for a while and then stop working. It wasn’t able to ping the router either. After a lot of trial and error and fooling around I realized that it’s because a long time ago I had disabled a lot of firewall rules on my Windows 10 laptop and in the process disallowed the IPv6 rules that were enabled by default. Silly of me! I changed all those back to their default state and now the laptop works fine without an issue.

Before moving on – double check that the IPv6 firewall on your router is enabled. Now that every machine in your LAN (that has an IPv6 address) is publicly accessible one has to be careful.

Notes on NLB, VMware, etc


Just some notes to myself so I am clear about it while reading about it. In the context of this VMware KB article – Microsoft NLB not working properly in Unicast mode.

Before I get to the article I better talk about a regular scenario. Say you have a switch and it’s got a couple of devices connected to it. A switch is a layer 2 device – meaning, it has no knowledge of IP addresses and networks etc. All devices connected to a switch are in the same network. The devices on a switch use MAC addresses to communicate with each other. Yes, the devices have IPv4 (or IPv6) addresses but how they communicate to each other is via MAC addresses.

Say Server A (IPv4 address 10.136.21.12) wants to communicate with Server B (IPv4 address 10.136.21.22). Both are connected to the same switch, hence on the same LAN. Communication between them happens in layer 2. Here the machines identify each other via MAC addresses, so first Server A checks whether it knows the MAC address of Server B. If it knows (usually coz Server A has communicated with Server B recently and the MAC address is cached in its ARP table) then there’s nothing to do; but if it does not, then Server A finds the MAC address via something called ARP (Address Resolution Protocol). The way this works is that Server A broadcasts to the whole network that it wants the MAC address of the machine with IPv4 address 10.136.21.22 (the address of Server B). This message goes to the switch, the switch sends it to all the devices connected to it, Server B replies with its MAC address and that is sent to Server A. The two now communicate – I’ll come to that in a moment.

When it’s communication from devices in a different network to Server A or Server B, the idea is similar except that you have a router connected to the switch. The router receives traffic for a device on this network – it knows the IPv4 address – so it finds the MAC address similar to above and passes it to that device. Simple.

Now, how does the switch know which port a particular device is connected to? Say the switch gets traffic addressed to MAC address 00:eb:24:b2:05:ac – how does the switch know which port that is on? Here’s how that happens –

  • First the switch checks if it already has this information cached. Switches have a table called the CAM (Content Addressable Memory) table which holds this cached info.
  • Assuming the CAM table doesn’t have this info the switch will send the frame (containing the packets for the destination device) to all ports. Note, this is not like ARP where a question is sent asking for the device to respond; instead the frame is simply sent to all ports. It is flooded to the whole network.
  • When a switch receives frames from a port it notes the source MAC address and port and that’s how it keeps the CAM table up to date. Thus when Server A sends data to Server B, the MAC address and switch port of Server A are stored in the switch’s CAM table.  This entry is only stored for a brief period.

Now let’s talk about NLB (Network Load Balancing).

Consider two machines – 10.136.21.11 with MAC address 00:eb:24:b2:05:ac and 10.136.21.12 with MAC address 00:eb:24:b2:05:ad. NLB is a form of load balancing wherein you create a Virtual IP (VIP) such as 10.136.21.10 such that any traffic to 10.136.21.10 is sent to either 10.136.21.11 or 10.136.21.12. Thus you have the traffic being load balanced between the two machines; and not only that, if any one of the machines goes down, nothing is affected because the other machine can continue handling the traffic.

But now we have a problem. If we want a VIP 10.136.21.10 that should send traffic to either host, how will this work when it comes to MAC addresses? That depends on the type of NLB. There are two sorts – Unicast and Multicast.

In Unicast the NIC that is used for clustering on each server has its MAC address changed to a new Unicast MAC address that’s the same for all hosts. Thus for example, the NICs that hold the NLB IP address 10.136.21.10 in the scenario above will have their MAC addresses changed from 00:eb:24:b2:05:ac and 00:eb:24:b2:05:ad respectively to (say) 00:eb:24:b2:05:af. Note that the MAC address is a Unicast MAC (which basically means the MAC address looks like a regular MAC address, such as that assigned to a single machine). Since this is a Unicast MAC address, and by definition it can only be assigned to one machine/ switch port, the NLB driver on each machine cheats a bit and changes the source MAC address of outgoing frames to whatever the original NIC MAC address was. That is to say –

  • Server IP 10.136.21.11
    • Has MAC address 00:eb:24:b2:05:ac
    • Which is changed to a MAC address of 00:eb:24:b2:05:af as part of the Unicast IP/ enabling NLB
    • However when traffic is sent out from this machine the MAC address is changed back to 00:eb:24:b2:05:ac
  • Same for Server 10.136.21.12

Why does this happen? This is because –

  • When a device wants to send data to the VIP address, it will try to find the MAC address using ARP. That is, it sends a broadcast over the network asking for the device with this IP address to respond. Since both servers now have the same MAC address for their NLB NIC, either server will respond with this common MAC address.
  • Now the switch receives frames for this MAC address. The switch does not have this in its CAM table so it will flood the frame to all ports – reaching both servers.
  • But why does either server change the source MAC address of its outgoing traffic? That’s because if outgoing frames had the common MAC address, the switch would associate this common MAC address with that port – resulting in all future traffic to the common MAC address only going to one of the servers. By changing the outgoing frame’s MAC address back to the server’s original MAC address, the switch never gets to store the common MAC address in its CAM table and frames for the common MAC address are always flooded.

In the context of VMware what this means is that (a) the port group to which the NLB NICs connect must allow changes to the MAC address and allow forged transmits; and (b) by default the port group notifies the physical switch of a VM’s MAC address when the VM is powered on – since this would expose the cluster MAC address to the switch, this notification too must be disabled. Without these changes NLB will not work in Unicast mode with VMware.
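With PowerCLI, those port group changes would look something like this – a sketch for a standard vSwitch port group, with placeholder host and port group names:

$pg = Get-VirtualPortGroup -VMHost esx01.example.com -Name "NLB-PortGroup"

# (a) allow MAC address changes and forged transmits
$pg | Get-SecurityPolicy | Set-SecurityPolicy -MacChanges $true -ForgedTransmits $true

# (b) stop notifying the physical switch of the VM's MAC address
$pg | Get-NicTeamingPolicy | Set-NicTeamingPolicy -NotifySwitches $false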

(This is a good post to read more about NLB).

Apart from Unicast NLB there’s also Multicast NLB. In this form the NLB NIC’s MAC address is not changed. Instead, a new Multicast MAC address is assigned to the NLB NIC. This is in addition to the regular MAC address of the NIC. The advantage of this method is that since each host retains its existing MAC address, communication between the hosts is unaffected. However, since the new MAC address is a Multicast MAC address – and switches by default are set to ignore such addresses – some changes need to be done on the switch side to get Multicast NLB working.
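On a Cisco switch, for instance, the usual fix is a static ARP entry plus a static MAC table entry pinning the cluster MAC to the ports the NLB hosts are on. Illustrative values below – 03bf.0a88.150a is the multicast cluster MAC that NLB derives from the VIP 10.136.21.10; the VLAN and interfaces are placeholders:

arp 10.136.21.10 03bf.0a88.150a ARPA
mac address-table static 03bf.0a88.150a vlan 10 interface Gi1/0/1 Gi1/0/2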

One thing to keep in mind is that it’s important to add a default gateway address to your NLB NIC. At work, for instance, the NLB IPv4 address was reachable within the network but from across networks it wasn’t. Turns out that’s coz Windows 2008 onwards have a strong host behavior – traffic coming in via one NIC does not go out via a different NIC, even if both are in the same subnet and the second NIC has a default gateway set. In our case I added the same default gateway to the NLB NIC too and it was then reachable across networks. 

Unable to login to vSphere because the admin@system-domain password cannot be reset


vSphere 5.1 has admin@system-domain as the default admin account. vSphere 5.5 changes that to administrator@vsphere.local. However, if you upgrade from 5.1 to 5.5 the default admin account remains admin@system-domain. Which is fine and dandy until the password for this account expires. Then you are unable to reset it or log in! See below. :)

Trying to login as usual

1 - login

Password has expired, needs a reset

2 - reset

Reset fails though coz you can only reset for the vsphere.local domain

3 - reset fails

Missed out on taking a screenshot, but if you were to try and log in with administrator@vsphere.local instead you get an error that the credentials are invalid (because that account doesn’t exist!). So you are stuck!

What do you do?

The solution is to reset the admin password.

When you do this vSphere automatically creates the administrator@vsphere.local account. Follow the steps in this KB article.

4 - reset password

Now you can login with administrator@vsphere.local and the generated password.


Power cycle/ Reset an HP blade server


Was getting the following error on one of our servers. It’s from ESXi. None of the NICs were working for the server (the NICs themselves seemed fine, just that the driver wasn’t loading). 

error

Power cycle required. 

I switched off and switched on the server but that didn’t help. Turns out that doesn’t actually power cycle the server (because the server still has power – doh!). What you need is something called an e-fuse reset. This power cycles the blade. You do this by opening an SSH session to the Onboard Administrator, finding the bay number of the blade you want to power cycle, and typing the command reset server <bay number>.

Good to know!

Note: The command does not appear when you type help, but it’s there:

reset server

Invalid Arguments

RESET SERVER { <bay number> }: Reset the server bay by momentarily removing all
power.  If a double dense blade is present, a single side cannot be reset.  Only
the entire server bay can be reset.

Also, to get a list of your bays and servers use the show server list command. To do the same for interconnects use the show interconnect list command.

[Aside] Various Azure links


My blog posting has taken a turn for the worse. Mainly coz I have been out of the country, and since returning I have been busy reading up on Azure monitoring.

Anyways, some quick links to tabs I want to close now but which will be useful for me later –

  • A funny thing with Azure monitoring (OMS/ Log Analytics) is that it can’t just do simple WMI queries against your VMs to check if a service is running. Crazy, right! So you have to resort to tricks like monitor the event logs to see any status messages. Came across this blog post with a neat idea of using performance counters. I came across that in turn from this blog post that has a different way of using the event logs.
  • We use load balancers in Azure and I was thinking I could tap into their monitoring signals (from the health probes) to know if a particular server/ service is up or down. In a way it doesn’t matter if a particular server/ service is down coz there won’t be a user impact coz of the load balancer, so what I am really interested in knowing is whether a particular monitored entity (from the load balancer point of view) is down or not. But turns out the basic load balancer cannot log monitoring signals if it is for internal use only (i.e. doesn’t have a public IP). You either need to assign it a public IP or use the newer standard load balancer.
  • Using OMS to monitor and send alert for BSOD.
  • Using OMS to track shutdown events.
  • A bit dated, but using OMS to monitor agent health (has some queries in the older query language).
  • A useful list of log analytics query syntax (it’s a translation from old to new style queries actually but I found it a good reference)

Now for some non-Azure stuff which I am too lazy to put in a separate blog post:

  • A blog post on the difference between application consistent and crash consistent backups.
  • At work we noticed that ADFS seemed to break for our Windows 10 machines. I am not too clear on the details as it seemed to break with just one application (ZScaler). By way of fixing it we came across this forum post which detailed the same symptoms as us and the fix suggested there (Set-ADFSProperties -IgnoreTokenBinding $True) did the trick for us. So what is this token binding thing?
    • Token Binding seems to be like cookies for HTTPS. I found this presentation to be a good explanation of it. Basically token binding binds your security token (like cookies or ADFS tokens) to the TLS session you have with a server, such that if anyone were to get hold of your cookie and try to use it in another session it will fail. Your tokens are bound to that TLS session only. I also found this medium post to be a good techie explanation of it (but I didn’t read it properly*). 
    • It seems to be enabled on the client side from Windows 10 1511 and upwards.
    • I saw the same recommendation in these Microsoft Docs on setting up Azure stack.

Some excerpts from the medium post (but please go and read the full one to get a proper understanding). The excerpt is mostly for my reference:

Most of the OAuth 2.0 deployments do rely upon bearer tokens. A bearer token is like ‘cash’. If I steal 10 bucks from you, I can use it at a Starbucks to buy a cup of coffee — no questions asked. I do not want to prove that I own the ten dollar note.

OAuth 2.0 recommends using TLS (Transport Layer Security) for all the interactions between the client, authorization server and resource server. This makes the OAuth 2.0 model quite simple with no complex cryptography involved — but at the same time it carries all the risks associated with a bearer token. There is no second level of defense.

OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. It relies on TLS — but since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel.

Lastly, I came across this awesome blog post (which too I didn’t read properly* – sorry to myself!) but which I liked a lot, so here’s a link for my future self – principles of token validation.


* I didn’t read these posts properly coz I was in a “troubleshooting mode” trying to find out why ADFS broke with token binding. If I took more time to read them I know I’d get side tracked. I still don’t know why ADFS broke, but I have an idea.

Creating an OMS tile for computer online/ offline status


This is by no means a big deal, nor am I trying to take credit. But it is something I setup a few days ago and I was pleased to see it in action today, so wanted to post it somewhere. :)

So as I said earlier I have been reading up on Azure monitoring these past few days. I needed something to aim towards and this was one of the things I tried out.

When you install the “Agent Health” solution it gives a tile in the OMS home page that shows the status of all the agents – basically their offline/ online status based on whether an agent is responsive or not.

The problem with this tile is that it only looks for servers that have been offline for more than 24 hours! So it is pretty useless if a server went down say 10 mins ago – I can keep staring at the tile the whole day and that server will not pop up.

I looked at creating something of my own and this is what I came up with –

If you click on the tile it shows a list of servers with the offline ones on top. :)

I removed the computer names in the screenshot; that’s why it is blank.

So how did I create this?

I went into View Designer and added the “Donut” as my overview tile. 

Changed the name to “Agent Status”. Left description blank for now. And filled the following for the query:

Heartbeat 
| summarize LastSeen = max(TimeGenerated) by Computer  
| extend Status = iff(LastSeen < ago(15m),"Offline","Online") 
| summarize Count = count() by Status 
| order by Count desc

Here’s what this query does. First it collects all the Heartbeat events. These are piped to a summarize operator. This summarizes the events by Computer name (which is an attribute of each event) and for each computer it computes a new attribute called LastSeen which is the maximum TimeGenerated timestamp of all its events. (You need to summarize to do this. The concept feels a bit alien to me and I am still getting my head around it. But I am getting there).

This summary is then piped to an extend operator which adds a new attribute called Status. (BTW attributes can also be thought of as columns in a table. So each event is a row with the attributes corresponding to columns.) This new attribute is set to Offline or Online depending on whether the previously computed LastSeen is older than 15 mins or not.

The output of this is sent to another summarize, which now summarizes it by Status with a count of the number of events of each type.

And this output is piped to an order to sort it in descending order. (I don’t need it for this overview tile but I use the same query later on too so wanted to keep it consistent.)

All good? Now scroll down and change the colors if you want to. I went with Color1 = #008272 (a dark green) and Color 2 = #ba141a (a dark red).

That’s it, do an apply and you will see the donut change to reflect the result of the query.

Now for the view dashboard – which is what you get when someone clicks the donut!

I went with a “Donut & list” for this one. In the General section I changed Group Title to “Agent Status”, in the Header section I changed Title to “Status”, and in the Donut section I pasted the same query as above. Also changed the colors to match the ones above. Basically the donut part is same as before because you want to see the same output. It’s the list where we make some changes.

In the List section I put the following query:

Heartbeat 
| summarize LastSeen = max(TimeGenerated) by Computer 
| extend Status = iff(LastSeen < ago(15m),"Offline","Online")
| sort by bin(LastSeen,1min) asc

Not much of a difference from before, except that I don’t do any second summarizing. Instead I sort by the LastSeen attribute after rounding it down to 1 min. This way the oldest heartbeat event comes up on top – i.e. the server that has been offline the longest. In the Column Titles section I changed the Name to “Computer” and Value to “Last Seen”. I think there is some way to add a heading for the Offline/Online column too but I couldn’t figure it out. Also, the Thresholds feature seemed cool – would be nice if I could color the offline ones red for instance, but I couldn’t figure that out either.

Lastly I changed the click-through navigation action to be “Log Search” and put the following:

Heartbeat 
| summarize LastCall = max(TimeGenerated) by Computer  
| where LastCall < ago(15m)

This just gives a list of computers that have been offline for more than 15 mins. I did this because the default action tries to search on my Status attribute and fails; so thought it’s best I put something of my own.

And that’s it really! Like I said no biggie, but it’s my first OMS tile and so I am proud. :)

ps. This blog post brought to you by the Tamil version of the song “Move Your Body” from the Bollywood movie “Johnny Gaddar” which for some reason has been playing in my head ever since I got home today. Which is funny coz that movie is heavily inspired by the books of James Hadley Chase and I was searching for his books at Waterstones when I was in London a few weeks ago (and also yesterday online).

Service SIDs etc.


Just so I don’t forget. 

The SCOM Agent on a server is called “Microsoft Monitoring Agent”. The short service name is “HealthService” and is set to run as Local System (NT Authority\System). Although not used by default, this service also has a virtual account created automatically by Windows called “NT SERVICE\HealthService” (this was a change introduced in Server 2008). 

As a refresher to myself and any others – this is a virtual account, i.e. a local account managed by Windows and one which we don’t have much control over (like changing the password etc.). All services, even though they may be set to run under Local System, can also run in a restricted mode under an automatically created virtual account “NT Service\<ServiceName>”. As with Local System, when a service running under such an account accesses a remote system it does so using the credentials of the machine it is running on – i.e. “<DomainName>\<ComputerName>$“.

Since these virtual accounts correspond to a service, and each virtual account has a unique SID, such virtual accounts are also called service SIDs. 

Although all services have a virtual account, it is not used by default. To see whether a virtual account is used or not one can use the sc qsidtype command. This queries the type of the SID of the virtual account. 

C:\>sc qsidtype HealthService
[SC] QueryServiceConfig2 SUCCESS

SERVICE_NAME: HealthService
SERVICE_SID_TYPE:  NONE

A type of NONE as in the above case means this virtual account is not used by the service. If we want a service to use its virtual account we must change this type to “Unrestricted” (or one could set it to “Restricted” too which creates a “write restricted” token – see this and this post to understand what that means). 

The sc sidtype command can be used to change this. 

C:\Windows\system32>sc sidtype HealthService unrestricted
[SC] ChangeServiceConfig2 SUCCESS

A service SID is of the form S-1-5-80-{SHA1 hash of short service name}. You can find this via the sc showsid command too:

C:\>sc showsid HealthService

NAME: HealthService
SERVICE SID: S-1-5-80-3696737894-3623014651-202832235-645492566-13622391
STATUS: Active

Note the status “Active”? That’s because I ran the above command after changing the SID type to “Unrestricted”. Before that, when the service SID wasn’t being used, the status was “Inactive”. 

So why am I reading about service SIDs now? :) It’s because I am playing with SCOM and as part of adding one of our SQL servers to it for monitoring I started getting alerts like these:

Cannot connect to database 'model'
Error Number: -2147467259
Description: [Microsoft][SQL Server Native Client 11.0][SQL Server]The server principal "NT AUTHORITY\SYSTEM" is not able to access the database "model" under the current security context.
Instance: MSSQLSERVER

I figured this would be because the account under which the Monitoring Agent runs has no permissions to the SQL databases, so I looked at RunAs accounts for SQL and came across this blog post. Apparently the in thing nowadays is to change the Monitoring Agent to use a service SID and give that service SID access to the databases. Neat, eh! :)

I did the first step above – changing the SID type to “Unrestricted” so the Monitoring Agent uses that service SID. So the next step is to give it access to the databases. This can be done by executing the following in SQL Management Studio after connecting to the SQL server in question:

USE [master]
GO
/****** Add a login in SQL Server for the service SID of System Center Advisor HealthService ******/
CREATE LOGIN [NT SERVICE\HealthService] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
GO
/****** Add the HealthService Service SID login to the sysadmin server role ******/
ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT SERVICE\HealthService]
GO

The comments explain what it does. And yes, it gives the “NT Service\HealthService” service SID admin rights to the server. I got this code snippet from this KB article but the original blog post I was reading has a version which gives minimal rights (it has some other cool goodies too, like a task to create this automatically). I was ok giving this service SID admin rights. 

DNS SRV records used by AD


Just thought I’d put these here for my own easy reference. I keep forgetting these records and when there’s an issue I end up Googling and trying to find them! These are DNS records you can query to see if clients are able to look up the PDC, GC, KDC, and DC of the domain you specify via DNS. If this is broken nothing else will work. :)

PDC  _ldap._tcp.pdc._msdcs.<DnsDomainName>
GC   _ldap._tcp.gc._msdcs.<DnsDomainName>
KDC  _kerberos._tcp.dc._msdcs.<DnsDomainName>
DC   _ldap._tcp.dc._msdcs.<DnsDomainName>

You would look this up using nslookup -type=SRV <Record>.

As a refresher, SRV records are of the form _Service._Proto.Name TTL Class SRV Priority Weight Port Target. The _Service._Proto.Name is what we are looking up above, just that our name space is _msdcs.<DnsDomainName>.
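For example, to check the DC record of the lab domain used earlier:

nslookup -type=SRV _ldap._tcp.dc._msdcs.rockylabs.zero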
