More on OS X Dynamic Linking

My recent blog post on using a third party library within an Objective-C project had one fatal flaw – it didn’t actually work!

The problem was that even though I had copied the PROJ.4 library into my project, linked against that version of the library, and copied it into my application, the application was still looking for the library in its original location – /opt/local/lib.

After much Googling, and a fair amount of head scratching, it turns out that dynamic libraries on OS X have an interesting feature: they hard code the path to the library at link time.  That doesn’t sound like a problem, however the path the application hard codes for the library isn’t set by one of the many Build Settings within your Xcode project; rather, it’s hard coded into the library itself when the library is built!

Helpfully, Apple provide some command line tools to resolve this issue.  The first is otool, which we can use to inspect this hard coded path/name in the PROJ.4 library:

$ otool -L libproj.0.dylib 
libproj.0.dylib:
	/opt/local/lib/libproj.0.dylib (compatibility version 8.0.0, current version 8.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)

The first line of output gives us the name of the library, which is also the path the dynamic loader will use to find it at run time.  This means that whenever we link an application against this library it will always look for it as /opt/local/lib/libproj.0.dylib no matter where we actually install it.
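
You can see the same recorded path from the application side too: running otool -L against the built app’s executable (the path below is hypothetical) would, at this point, list the same /opt/local/lib/libproj.0.dylib entry among the app’s dependencies:

$ otool -L MyApp.app/Contents/MacOS/MyApp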

The other tool Apple provides, install_name_tool, allows us to modify the name/path of the library:

$ install_name_tool -id "@executable_path/../Frameworks/libproj.0.dylib" libproj.0.dylib

What this command does is replace the existing name/path with @executable_path/../Frameworks/libproj.0.dylib.  The @executable_path keyword tells the system that it should look for the library relative to the executable – in this case one folder up, then in the Frameworks folder.

We can check that the change has taken:

$ otool -L libproj.0.dylib 
libproj.0.dylib:
	@executable_path/../Frameworks/libproj.0.dylib (compatibility version 8.0.0, current version 8.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)

With the library updated, cleaning the project and then rebuilding creates an application that works as expected.  I’ve updated the project on GitHub to include this fix and some clean up of the coordinate conversion code.
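
As an aside, if you have an already-built binary that still references the old path, install_name_tool can also rewrite that reference directly with its -change option (the app path below is hypothetical), which avoids a full relink:

$ install_name_tool -change /opt/local/lib/libproj.0.dylib "@executable_path/../Frameworks/libproj.0.dylib" MyApp.app/Contents/MacOS/MyApp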

So the slightly updated sequence for adding a third party library to an application is:

  1. Copy the library into the project.
  2. Switch to the command line and update the library path/name using install_name_tool.
  3. Add a Copy Files build phase so that the library is copied into the application bundle.
  4. Update the Header Search Path in the build settings so that the header files for the library can be found by Xcode.

Your application should now work properly on any OS X system.  For more detail about linking and @executable_path take a look at this blog post by Mike Ash; his other blog posts are good as well!

Using the PROJ.4 Library in an OS X Objective-C Application

I’ve been tinkering with programming in Objective-C to create both Mac and iOS applications, and one of the areas I kept meaning to look into was how to use libraries written in C for Unix or Linux.  This post describes how I created a simple application for OS X Mavericks that uses the PROJ.4 library to convert coordinates from WGS84 to OSGB.

The first step was to get the PROJ.4 library.  There are a number of systems that provide prebuilt open source software for the Mac; I chose to go with MacPorts.  To install the core MacPorts system I followed the simple instructions.

Once MacPorts was installed I checked to see whether the system was working by listing information about the PROJ.4 library:

iMac:~ kms$ port info proj
proj @4.8.0 (gis)
Variants:             universal

Description:          PROJ.4 is a library for converting data between
                      cartographic projections.
Homepage:             http://trac.osgeo.org/proj/

Platforms:            darwin
License:              MIT
Maintainers:          seanasy@gmail.com, openmaintainer@macports.org

With the system working I could then install PROJ.4:

iMac:~ kms$ sudo port install proj
--->  Fetching archive for proj
--->  Attempting to fetch proj-4.8.0_0.darwin_13.x86_64.tbz2 from http://mse.uk.packages.macports.org/sites/packages.macports.org/proj
--->  Attempting to fetch proj-4.8.0_0.darwin_13.x86_64.tbz2.rmd160 from http://mse.uk.packages.macports.org/sites/packages.macports.org/proj
--->  Installing proj @4.8.0_0
--->  Activating proj @4.8.0_0
--->  Cleaning proj
--->  Updating database of binaries: 100.0%
--->  Scanning binaries for linking errors: 100.0%
--->  No broken files found.

And looking in /opt/local/bin and /opt/local/lib I can see PROJ.4 has been installed:

iMac:~ kms$ ls /opt/local/bin
cs2cs		invgeod		port		portmirror
daemondo	invproj		portf		proj
geod		nad2bin		portindex
iMac:~ kms$ ls /opt/local/lib
libproj.0.dylib	libproj.a	libproj.dylib	pkgconfig

Now that the installation of PROJ.4 was complete I turned my attention to Xcode 5.  In Xcode I created a new OS X project of type Application/Cocoa Application.  I didn’t change any of the values during project creation; if you’re following along just make sure you’ve not selected a Document Based application and you’ve not enabled Core Data.  Once I had the project created there were four steps to undertake:

  1. Add the PROJ.4 dynamic library file to my project.
  2. Configure my project to link my application to the library during build.
  3. Configure the project to copy the dynamic library file into the Frameworks folder of my app bundle.
  4. Configure my project so that it can see the PROJ.4 header file proj_api.h.

To add the dynamic library file to the project, control-click on the Frameworks group in the Project Navigator and select Add Files to… from the popup menu.  Navigate to /opt/local/lib and select libproj.dylib, make sure that Copy items into destination group’s folder is selected, and add it to your project target (you could add it to your test target as well if you want).  Then click Add.

Step 2, configure the project to link to the PROJ.4 library, isn’t necessary with Xcode 5.  The step of adding the library to the project configures the linking automagically.  For completeness, linking is configured by selecting the project in the Project Navigator, then selecting the target for which you want to configure the build.  Click on Build Phases and you should see Link Binary with Libraries; clicking the disclosure triangle you should see the PROJ.4 library along with the Cocoa framework.  If you don’t see the PROJ.4 library you can add it by clicking on the + button.

The next configuration step was to add a Copy Files phase to the build so that the PROJ.4 library is included in my app bundle.  In the Build Phases page of the project target I clicked the + sign on the top left, just below General, to add a New Copy Files Build Phase.  Clicking on the disclosure triangle of the Copy Files phase I set the Destination to Frameworks, left Subpath blank, and left Copy only when installing unselected.  Finally I clicked the + to add libproj.0.dylib to the Copy Files build phase.

The last task was to add /opt/local/include as a header search path in the project’s Build Settings so that Xcode would be able to see the PROJ.4 header file, proj_api.h.  In Build Settings I made sure that All was selected rather than Basic and scrolled down to the Search Paths section.  In this section I expanded Header Search Paths and added /opt/local/include to both the Debug and Release configurations.  With this done I could switch to AppDelegate.m and add the line #include "proj_api.h" with no errors (and autocomplete works).

With all the configuration done I could build and run the project with no errors – not that the application did anything at this stage.

Rather than explain how to create a simple Cocoa application with a couple of NSTextFields for the WGS84 latitude and longitude; an NSButton to trigger the conversion; and another NSTextField to display the OSGB coordinates, I uploaded my project to GitHub.
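
For reference, the code below assumes declarations along these lines in AppDelegate.h – a sketch only, since the names used in the code are the only parts taken from the project itself:

#import <Cocoa/Cocoa.h>
#include "proj_api.h"

@interface AppDelegate : NSObject <NSApplicationDelegate>
{
    projPJ pjWGS84;   // source projection (WGS84 latitude/longitude)
    projPJ pjOSGB;    // target projection (OSGB National Grid)
}

@property (weak) IBOutlet NSTextField *latitude;   // WGS84 latitude input
@property (weak) IBOutlet NSTextField *longitude;  // WGS84 longitude input
@property (weak) IBOutlet NSTextField *osgbRef;    // OSGB result output

- (IBAction)convertToOSGB:(id)sender;

@end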

The code I added is limited to two functions in AppDelegate.m:

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // Initialise the source (WGS84 lat/long) and target (OSGB National Grid)
    // projections from their PROJ.4 definition strings; exit if either fails.
    if (!(pjWGS84 = pj_init_plus("+proj=longlat +ellps=WGS84 +no_defs"))) {
        NSLog(@"Could not initialise WGS84");
        exit(1);
    }
    if (!(pjOSGB = pj_init_plus("+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +datum=OSGB36 +units=m +no_defs"))) {
        NSLog(@"Could not initialise OSGB");
        exit(1);
    }
}

And:

- (IBAction)convertToOSGB:(id)sender {
    double x, y;
    int e;

    // pj_transform expects angular coordinates in radians.
    x = DEG_TO_RAD * self.longitude.doubleValue;
    y = DEG_TO_RAD * self.latitude.doubleValue;

    // Transform the single point in place from WGS84 to OSGB.
    if ((e = pj_transform(pjWGS84, pjOSGB, 1, 0, &x, &y, NULL)) != 0) {
        [self.osgbRef setStringValue:@"Transform Error"];
    } else {
        // x and y now hold eastings and northings in metres.
        [self.osgbRef setStringValue:[NSString stringWithFormat:@"%d %d", @(x).intValue, @(y).intValue]];
    }
}

Installing OpenStack Keystone on Fedora

I have been playing a bit with cloud services, in particular Amazon Web Services, but I recently wanted to install OpenStack to see what all the hype was about and to better understand the underlying components and technologies.  It’s possible to do a full OpenStack install on a single server or virtual machine running Fedora using the RDO instructions; however, I wanted to do the build by hand.

I started with a minimal install of Fedora using the standard file system layout, ran yum update, and rebooted.  Once the system was back up I installed the RDO release RPM as per the RDO quickstart instructions:

sudo yum install http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm

This gives access to pre-built RPMs for all of the OpenStack Havana components.  RDO makes use of a number of other components to provide a DevOps style approach to installation.  I didn’t want this extra functionality so I edited /etc/yum.repos.d/foreman.repo and /etc/yum.repos.d/puppetlabs.repo to disable both of those repositories.
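
One way to do that (assuming the standard enabled=1 flag in those repo files) is with sed:

sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/foreman.repo
sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/puppetlabs.repo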

OpenStack supports a variety of database backends, but the simplest and best documented seems to be MySQL.  Fedora has switched to using the MariaDB fork of MySQL so that’s what I installed, along with the MySQL module for Python:

sudo yum install mariadb mariadb-server MySQL-python
sudo systemctl start mariadb.service
sudo systemctl enable mariadb.service
sudo mysql_secure_installation

Note that only the database server needs the mariadb-server package.  Next I installed the OpenStack utils package:

sudo yum install openstack-utils

As well as the database, the other piece of infrastructure that OpenStack needs is a messaging service that provides AMQP.  The two main implementations of this are RabbitMQ and Qpid.  I’ve chosen to use Qpid:

sudo yum install qpid-cpp-server memcached

For simplicity I turned off authentication in Qpid by adding auth=no to /etc/qpidd.conf; you probably wouldn’t do this in a production deployment!  Start and enable qpidd:

sudo systemctl start qpidd.service
sudo systemctl enable qpidd.service

Keystone is the identity component of OpenStack, similar to IAM in AWS terms.  Install the Keystone packages:

sudo yum install openstack-keystone python-keystoneclient

Keystone needs to be configured to use the database we installed.  The openstack-config command allows us to set values in various config files without firing up vi.

sudo openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone_db_pass@controller/keystone

The arguments to this command are the --set option, indicating you want to set a value; the file that contains the value we want to set; the section within the file (if you edit the file with vi you can search for [sql]); the parameter we want to set, connection; and the value for the parameter.  In this case we’re setting the SQLAlchemy connection string, which is an RFC 1738 URL.

Now that the database connection is configured it can be initialised.  Note that you need to pass the same password value (in my case “keystone_db_pass”) as you configured in the SQLAlchemy connection string/URL:

sudo openstack-db --init --service keystone --password keystone_db_pass

You’ll be prompted for the database root password you set when you ran the mysql_secure_installation command.

Set up the admin token, which acts as a bootstrap password for Keystone.  The first command creates a random value and stores it in a shell environment variable so you can use it in subsequent commands:

ADMIN_TOKEN=$(openssl rand -hex 10)
sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
sudo keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

The second command initialises the certificates that Keystone uses to create the cryptographically strong authentication tokens that we will use later when accessing the service via the command line or API.  There’s fuller discussion in the OpenStack Keystone documentation.

Now we can start the service:

sudo chown -R keystone:keystone /etc/keystone/ /var/log/keystone/keystone.log
sudo systemctl start openstack-keystone.service
sudo systemctl enable openstack-keystone.service

We need to set up a couple of environment variables so that we can use the command line tools.  OS_SERVICE_TOKEN is the token we created with the previous openssl command.  OS_SERVICE_ENDPOINT is the URL for the Keystone API; I’m using the IP address 10.0.0.29, but you should use the appropriate hostname or IP address for your environment:

export OS_SERVICE_TOKEN="763237339bc02dd92bfb"
export OS_SERVICE_ENDPOINT="http://10.0.0.29:35357/v2.0"

With all of that done we can now start using the keystone command to actually create tenants, users, and services:

keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 4b7e1355bb4d4afb960da724a9dfa0fc |
|     name    |              admin               |
+-------------+----------------------------------+
keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | c2e553ac9d164c74aff6d1a130f0f099 |
|     name    |             service              |
+-------------+----------------------------------+

These two commands create our first two tenants.  In OpenStack tenants can be thought of as groups that hold users and other resources that clouds provide.  For example, in a public cloud a tenant might represent a customer of that cloud service; in a private cloud it might represent a department or business line.  The admin tenant will hold the admin users for the cloud and the service tenant will hold the services that the cloud provides.  The names aren’t special; you could call them anything.

We can also use the keystone command to list the tenants:

keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 4b7e1355bb4d4afb960da724a9dfa0fc |  admin  |   True  |
| c2e553ac9d164c74aff6d1a130f0f099 | service |   True  |
+----------------------------------+---------+---------+

The next step is to create an admin user; you should give this user a better password than I’ve chosen here!

keystone user-create --name=admin --pass=admin --email=admin@example.org
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        admin@example.org         |
| enabled  |               True               |
|    id    | 22f1020799b7425cabbf22837934d510 |
|   name   |              admin               |
+----------+----------------------------------+

Privileges in OpenStack are assigned to users through roles: privileges are associated with a role, and the role is then assigned to a user.  We’ve got the admin user, so the next step is to create the admin role.  In this case the role name is important as it needs to match the role name in the policy.json file that controls rights and access.

keystone role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 676f70baed8e430799138acf75a3f8b3 |
|   name   |              admin               |
+----------+----------------------------------+

The final step is to tie the tenant, user, and role together:

keystone user-role-add --user=admin --tenant=admin --role=admin

To summarise, creating a user consists of four steps (a worked example follows the list):

  1. If necessary create a new tenant – keystone tenant-create.
  2. Create the new user – keystone user-create.
  3. If necessary create the new role (remember the role name must match that in the policy.json file) – keystone role-create.
  4. Tie the tenant, role, and user together – keystone user-role-add.
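
As a worked example, here’s the sequence for a hypothetical ordinary user (the tenant, user, password, and role names below are all made up; remember the role must match one defined in your policy.json):

keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-create --name=alice --pass=alice_pass --email=alice@example.org
keystone role-create --name=member
keystone user-role-add --user=alice --tenant=demo --role=member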

In OpenStack all of the cloud resources are presented as services; this includes Keystone itself.  Our next step is to create the Keystone service and then make it available:

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|      id     | dbb075345d404db5a64e33918a8e96f4 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+

Having created the service we need to create endpoints for consumers to access the service.  Note that there are three different endpoints; this is to support the common deployment scenario where the server hosting Keystone has three network interfaces – one for public access (i.e. users of the cloud), one for internal access (i.e. other services within the cloud), and one for admin access.  In this test deployment they’re all on the same interface.  The --service-id parameter is the UUID that was returned as the id parameter in the keystone service-create command above.

keystone endpoint-create  --service-id=dbb075345d404db5a64e33918a8e96f4 --publicurl=http://10.0.0.29:5000/v2.0 --internalurl=http://10.0.0.29:5000/v2.0 --adminurl=http://10.0.0.29:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://10.0.0.29:35357/v2.0    |
|      id     | 35bfb42f44194228a66ec8a70b44493e |
| internalurl |    http://10.0.0.29:5000/v2.0    |
|  publicurl  |    http://10.0.0.29:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | dbb075345d404db5a64e33918a8e96f4 |
+-------------+----------------------------------+

We can now verify that the tenant, user, and service we’ve created are all working.  To do this we first need to clear the credentials and service endpoint we’ve been using so far:

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

We can now use the keystone command with the username, password, and service endpoint that we just created:

keystone --os-username=admin  --os-password=admin --os-auth-url=http://10.0.0.29:35357/v2.0 token-get

We can do the same thing, but additionally specifying the tenant name:

keystone --os-username=admin  --os-password=admin --os-tenant-name=admin --os-auth-url=http://10.0.0.29:35357/v2.0 token-get

OpenStack authentication works on the principle that you supply valid credentials to a service endpoint and in return you get a token, which you present to the service when you make subsequent requests.  The previous two commands use the token-get subcommand to request a token.

It can get tedious to have to type in the username, password, tenant name, and endpoint parameters for each command, so OpenStack allows you to set these as environment variables:

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.29:35357/v2.0

Which then allows you to shorten commands:

keystone user-list
+----------------------------------+-------+---------+-------------------+
|                id                |  name | enabled |       email       |
+----------------------------------+-------+---------+-------------------+
| 22f1020799b7425cabbf22837934d510 | admin |   True  | admin@example.org |
+----------------------------------+-------+---------+-------------------+

Note that putting passwords, especially admin ones, into environment variables probably isn’t best practice!
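
A common pattern, rather than typing the exports each time, is to keep them in a small rc file that you source only when needed (the file name here is just a convention):

cat > ~/keystonerc_admin <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.29:35357/v2.0
EOF
source ~/keystonerc_admin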

Now that command line access is working we can do exactly the same things using the Keystone API.  In the following example we make an HTTP POST request to the tokens URL, passing our credentials as a JSON document in the request payload.

In response we get a token that we can use in further API calls, a service catalog detailing the service endpoints, and information about our user, role, and tenant.

curl -k -X 'POST' -v http://10.0.0.29:35357/v2.0/tokens -d '{"auth":{"passwordCredentials":{"username": "admin", "password":"admin"}, "tenantId":"4b7e1355bb4d4afb960da724a9dfa0fc"}}' -H 'Content-type: application/json'
* About to connect() to 10.0.0.29 port 35357 (#0)
*   Trying 10.0.0.29...
* Connected to 10.0.0.29 (10.0.0.29) port 35357 (#0)
> POST /v2.0/tokens HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.29:35357
> Accept: */*
> Content-type: application/json
> Content-Length: 121
> 
* upload completely sent off: 121 out of 121 bytes
< HTTP/1.1 200 OK
< Vary: X-Auth-Token
< Content-Type: application/json
< Content-Length: 2347
< Date: Sun, 03 Nov 2013 17:22:39 GMT
< 
{
  "access": {
    "token": {
      "issued_at": "2013-11-03T17:22:39.311048", 
      "expires": "2013-11-04T17:22:39Z", 
      "id": "MIIErwYJKoZIhvcNAQcCoIIEoDCCBJwCAQExCTAHBgUrDgMCGjCCAwUGCSqGSIb3DQEHAaCCAvYEggLyeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0wM1QxNzoyMjozOS4zMTEwNDgiLCAiZXhwaXJlcyI6ICIyMDEzLTExLTA0VDE3OjIyOjM5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjRiN2UxMzU1YmI0ZDRhZmI5NjBkYTcyNGE5ZGZhMGZjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6MzUzNTcvdjIuMCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6NTAwMC92Mi4wIiwgImlkIjogIjIxMjhiOWExMDc0OTQ3ZDU4NDI0YWQwOTJmNTM3MTdhIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMC4wLjI5OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjIyZjEwMjA3OTliNzQyNWNhYmJmMjI4Mzc5MzRkNTEwIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjc2ZjcwYmFlZDhlNDMwNzk5MTM4YWNmNzVhM2Y4YjMiXX19fTGCAYEwggF9AgEBMFwwVzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbQIBATAHBgUrDgMCGjANBgkqhkiG9w0BAQEFAASCAQCUcTFJU550veZlBYtXQos0Q24BJVbw2acBSZ2p42Ifw2itZxHRa6RpYKyPhltTE93v8zbLbNLVS+KI-+U-SP3zsTzWrrFxS2Bt7AWh2qPhPossGqmxmv3DnFZPk5bOXk3fMWMRnYydsH5hFknmhilbPX4EwJNV6qLyZvDjpg4szIc8YBVludPiy-6aGrv7eWNZUhMi7zz3b7SSYJ0gTTB7brTzmtcH946ayY33a0lx8fSlcfUWV22Ey7BWPFHzVQxzF+2Ho46uIqPDs3ohV9q5I-XSOwTvA+lWvI35VbFHnBKnhjpYGrGAjexhQyTD7InCGYejKCu6H1yedr2c0aci", 
      "tenant": {
        "description": "Admin Tenant", 
        "enabled": true, 
        "id": "4b7e1355bb4d4afb960da724a9dfa0fc", 
        "name": "admin"
      }
    }, 
    "serviceCatalog": [{
      "endpoints": [{
        "adminURL": "http://10.0.0.29:35357/v2.0",
        "region": "regionOne",
        "internalURL": "http://10.0.0.29:5000/v2.0",
        "id": "2128b9a1074947d58424ad092f53717a",
        "publicURL": "http://10.0.* Connection #0 to host 10.0.0.29 left intact
0.29:5000/v2.0"
      }],
      "endpoints_links": [],
      "type": "identity",
      "name": "keystone"
    }],
    "user": {
      "username": "admin",
      "roles_links": [],
      "id": "22f1020799b7425cabbf22837934d510",
      "roles": [{
        "name": "admin"
      }],
      "name": "admin"
    },
    "metadata": {
      "is_admin": 0,
      "roles": ["676f70baed8e430799138acf75a3f8b3"]
    }
  }
}

The next example uses the authentication token we’ve just received to make an API call listing the extensions that are available in this OpenStack instance.  Note that this is an HTTP GET request so there’s no payload this time:

curl -k -D - -H "X-Auth-Token: MIIErwYJKoZIhvcNAQcCoIIEoDCCBJwCAQExCTAHBgUrDgMCGjCCAwUGCSqGSIb3DQEHAaCCAvYEggLyeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0wM1QxNzoyMjozOS4zMTEwNDgiLCAiZXhwaXJlcyI6ICIyMDEzLTExLTA0VDE3OjIyOjM5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjRiN2UxMzU1YmI0ZDRhZmI5NjBkYTcyNGE5ZGZhMGZjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6MzUzNTcvdjIuMCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6NTAwMC92Mi4wIiwgImlkIjogIjIxMjhiOWExMDc0OTQ3ZDU4NDI0YWQwOTJmNTM3MTdhIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMC4wLjI5OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjIyZjEwMjA3OTliNzQyNWNhYmJmMjI4Mzc5MzRkNTEwIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjc2ZjcwYmFlZDhlNDMwNzk5MTM4YWNmNzVhM2Y4YjMiXX19fTGCAYEwggF9AgEBMFwwVzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbQIBATAHBgUrDgMCGjANBgkqhkiG9w0BAQEFAASCAQCUcTFJU550veZlBYtXQos0Q24BJVbw2acBSZ2p42Ifw2itZxHRa6RpYKyPhltTE93v8zbLbNLVS+KI-+U-SP3zsTzWrrFxS2Bt7AWh2qPhPossGqmxmv3DnFZPk5bOXk3fMWMRnYydsH5hFknmhilbPX4EwJNV6qLyZvDjpg4szIc8YBVludPiy-6aGrv7eWNZUhMi7zz3b7SSYJ0gTTB7brTzmtcH946ayY33a0lx8fSlcfUWV22Ey7BWPFHzVQxzF+2Ho46uIqPDs3ohV9q5I-XSOwTvA+lWvI35VbFHnBKnhjpYGrGAjexhQyTD7InCGYejKCu6H1yedr2c0aci" -X 'GET' -v http://10.0.0.29:35357/v2.0/extensions  -H 'Content-type: application/json'

In response we get an HTTP 200 OK from the server and a JSON document that lists the available extensions:

{
  "extensions": {
    "values": [{
      "updated": "2013-07-07T12:00:0-00:00", 
      "name": "OpenStack S3 API", 
      "links": [{
        "href": "https://github.com/openstack/identity-api", 
        "type": "text/html", 
        "rel": "described by"
      }], 
      "namespace": "http://docs.openstack.org/identity/api/ext/s3tokens/v1.0", 
      "alias": "s3tokens", 
      "description": "OpenStack S3 API."
    }, {
      "updated": "2013-07-11T17:14:00-00:00", 
      "name": "OpenStack Keystone Admin", 
      "links": [{
        "href": "https://github.com/openstack/identity-api", 
        "type": "text/html", 
        "rel": "described by"
      }], 
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-KSADM/v1.0", 
      "alias": "OS-KSADM", 
      "description": "OpenStack extensions to Keystone v2.0 API enabling Administrative Operations."
    }, {
      "updated": "2013-07-07T12:00:0-00:00", 
      "name": "OpenStack EC2 API", 
      "links": [{
        "href": "https://github.com/openstack/identity-api",
        "type": "text/html",
        "rel": "described by"
      }],
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-EC2/v1.0", 
      "alias": "OS-EC2",
      "description": "OpenStack EC2 Credentials ba* Connection #0 to host 10.0.0.29 left intact
ckend."
    }, {
      "updated": "2013-07-23T12:00:0-00:00",
      "name": "Openstack Keystone Endpoint Filter API",
      "links": [{
        "href": "https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-ep-filter-ext.md",
        "type": "text/html",
        "rel": "described by"
      }],
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-EP-FILTER/v1.0",
      "alias": "OS-EP-FILTER",
      "description": "Openstack Keystone Endpoint Filter API."
    }]
  }
}

At this point we’ve got Keystone up and running and demonstrated that we can use the service both through the command line tools and the API.  The next step is to add additional OpenStack services that will make use of Keystone.

One thing I found confusing the first time I went through a Keystone deployment was the variety of users and passwords that I needed to create, so here’s a summary:

  1. The root or admin user for the database you are using.  In my case this was the MySQL root user password set when I ran the mysql_secure_installation command.
  2. The password that Keystone uses when accessing its own database, set when configuring and initialising the Keystone database.
  3. The Keystone admin token.  This is effectively a root password for Keystone and is stored unencrypted in the /etc/keystone/keystone.conf file.  It should only be used during initial Keystone deployment and configuration.
  4. Finally, the Keystone admin user that you should create as soon as the service is up and running.  This is the account that you’ll use to perform all ongoing admin tasks.  Best practice would be to create individual accounts for all users that need admin privileges and assign them to the admin role.

AWS Command Line Tools for Mac OS X

Just a quick guide to get the Amazon Web Services (AWS) command line tools installed and configured on an Apple Mac running Mountain Lion.

The first task was to get pip installed:

sudo easy_install pip

Then it’s a simple case of using pip to install the AWS CLI:

sudo pip install awscli

After a few minutes you should have the CLI tools installed.  The final task is to set up your credentials.  Create the file $HOME/.aws/config; it should contain something like the following:

[default]
aws_access_key_id = YOURKEYHERE
aws_secret_access_key = YOURSECRETACCESSKEYHERE
region = eu-west-1

You should replace these values with your own access keys and preferred region.
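
A quick way to check everything is working is a simple read-only call, for example (this assumes your credentials are allowed to query EC2):

aws ec2 describe-regions

For extra credit, if you’re a Bash shell user, you can enable command completion: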

complete -C aws_completer aws
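
To make completion persist across shell sessions you could append that line to your shell startup file (the path here is an assumption; adjust for your shell):

echo 'complete -C aws_completer aws' >> ~/.bash_profile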

There’s much more information on the AWS CLI available from Amazon.

More IPv6 Networking

At the end of a previous post I left myself with two systems talking to each other using IPv6 link-local addresses.  In this post I’m going to describe a couple of methods of applying wider scope IPv6 addresses.

Basic network with link-local addresses

The above diagram shows the basic network I am using.  Alpha and beta are identical Fedora 19 virtual machines; the only difference is their Ethernet MAC addresses, and on both systems the Ethernet interface is enp0s3.  The systems were configured as described in my previous blog post.  I can ping from system to system:

[root@alpha ~]# ping6 -I enp0s3 fe80::2
PING fe80::2(fe80::2) from fe80::1 enp0s3: 56 data bytes
64 bytes from fe80::2: icmp_seq=1 ttl=64 time=0.279 ms
64 bytes from fe80::2: icmp_seq=2 ttl=64 time=0.246 ms
64 bytes from fe80::2: icmp_seq=3 ttl=64 time=0.357 ms
^C
--- fe80::2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.246/0.294/0.357/0.046 ms

[root@beta ~]# ping6 -I enp0s3 fe80::1
PING fe80::1(fe80::1) from fe80::2 enp0s3: 56 data bytes
64 bytes from fe80::1: icmp_seq=1 ttl=64 time=0.223 ms
64 bytes from fe80::1: icmp_seq=2 ttl=64 time=0.317 ms
64 bytes from fe80::1: icmp_seq=3 ttl=64 time=0.277 ms
^C
--- fe80::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.223/0.272/0.317/0.040 ms

Note the use of the -I enp0s3 argument to ping6; this is because the addresses I am using are link-local scope only, as discussed previously.  I can also connect from system to system using SSH:

[root@alpha ~]# ssh fe80::2%enp0s3
The authenticity of host 'fe80::2%enp0s3 (fe80::2%enp0s3)' can't be established.
RSA key fingerprint is 67:d7:20:6e:f2:e9:25:de:d8:4c:e6:8b:e1:71:f9:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'fe80::2%enp0s3' (RSA) to the list of known hosts.
root@fe80::2%enp0s3's password: 
Last login: Fri Sep 6 13:53:08 2013
[root@beta ~]#

One point to note: I had to use the %enp0s3 scoping suffix on the address so that SSH knew which interface to use for the connection.

Prefixes

IPv6 works on the basis of allocated prefixes; these are similar in idea to CIDR subnets in IPv4.  You would normally obtain your prefix from your ISP.  According to the IETF recommendations ISPs should be handing customers /56 prefixes.  This means that out of the 128 bit IPv6 address you would have 72 bits for use in creating your own subnets and host addresses.
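
As a worked example, a /56 allocation such as the (documentation-only) 2001:db8:0:ab00::/56 could be carved into 256 /64 subnets, 2001:db8:0:ab00::/64 through 2001:db8:0:abff::/64, leaving the remaining 64 bits of each subnet for host addresses.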

Best practice is that you shouldn’t use prefixes longer than 64 bits as this starts to interfere with the automatic address allocation that I’ll discuss later.  The exception to this rule is for point-to-point links, where you should use 127 bit prefixes as per RFC 6164.
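
For example, the two ends of a point-to-point link could be numbered 2001:db8:ffff:ffff::/127 and 2001:db8:ffff:ffff::1/127, the only two addresses within that /127.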

Given that IPv6 addresses are designed to be global, it’s not a good idea to use real prefixes in examples like those contained in this blog, as this could lead to duplicate addresses and prefixes.  To solve this problem RFC 3849 defines the prefix 2001:db8::/32 as reserved for use in documentation.  This is the prefix I’ll be using in the rest of this blog post.

Manual Addressing

Allocating an IPv6 address manually to an interface is very simple:

[root@alpha ~]# ip addr add dev enp0s3 2001:db8::1/64
[root@alpha ~]# ip addr show dev enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:11:14:ff brd ff:ff:ff:ff:ff:ff
 inet6 2001:db8::1/64 scope global 
 valid_lft forever preferred_lft forever
 inet6 fe80::1/64 scope link 
 valid_lft forever preferred_lft forever

With the address assigned I can now ping it:

[root@alpha ~]# ping6 2001:db8::1
PING 2001:db8::1(2001:db8::1) 56 data bytes
64 bytes from 2001:db8::1: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 2001:db8::1: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 2001:db8::1: icmp_seq=3 ttl=64 time=0.038 ms
^C
--- 2001:db8::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.017/0.029/0.038/0.010 ms

Note that I don’t need to provide the -I enp0s3 argument, nor do I need to provide the %enp0s3 interface suffix to the address.  This is because the 2001:db8::1/64 address has global scope.  Contrast this with the link-local address fe80::1/64:

[root@alpha ~]# ping6 fe80::1
connect: Invalid argument
[root@alpha ~]# ping6 -I enp0s3 fe80::1
PING fe80::1(fe80::1) from fe80::1 enp0s3: 56 data bytes
64 bytes from fe80::1: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from fe80::1: icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from fe80::1: icmp_seq=3 ttl=64 time=0.032 ms
^C
--- fe80::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.017/0.026/0.032/0.007 ms

I configured an address (2001:db8::2/64) on the other system which I can also ping, and connect to using SSH:

[root@alpha ~]# ping6 2001:db8::2
PING 2001:db8::2(2001:db8::2) 56 data bytes
64 bytes from 2001:db8::2: icmp_seq=1 ttl=64 time=0.296 ms
64 bytes from 2001:db8::2: icmp_seq=2 ttl=64 time=0.298 ms
64 bytes from 2001:db8::2: icmp_seq=3 ttl=64 time=0.311 ms
^C
--- 2001:db8::2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.296/0.301/0.311/0.021 ms
[root@alpha ~]# ssh 2001:db8::2
The authenticity of host '2001:db8::2 (2001:db8::2)' can't be established.
RSA key fingerprint is 67:d7:20:6e:f2:e9:25:de:d8:4c:e6:8b:e1:71:f9:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '2001:db8::2' (RSA) to the list of known hosts.
root@2001:db8::2's password: 
Last login: Wed Sep 11 11:03:02 2013 from fe80::1%enp0s3
[root@beta ~]# ip addr show dev enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:84:41:4d brd ff:ff:ff:ff:ff:ff
 inet6 2001:db8::2/64 scope global 
 valid_lft forever preferred_lft forever
 inet6 fe80::2/64 scope link 
 valid_lft forever preferred_lft forever

Again note that I don’t need to add the %enp0s3 scope suffix to the ssh command.

Stateless Address Auto-Configuration

Stateless Address Auto-Configuration (SLAAC) is the process by which an IPv6 host can automatically obtain its prefix, default gateway, MTU, and other information from a router on the network.  For a Linux system to provide SLAAC services on a network it needs to have radvd (the Router Advertisement Daemon) installed.  I did this on alpha with the command yum install radvd.

Radvd is controlled by the configuration file /etc/radvd.conf.  In this file you specify the interfaces you want radvd to advertise on and the options it should use; for example I am using:

interface enp0s3
{
    AdvSendAdvert on;
    prefix 2001:db8::1/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

Using systemctl I can start radvd:

[root@alpha radvd-1.9.2]# systemctl start radvd.service

I can then connect to the console of beta and bring the Ethernet interface up:

[root@beta ~]# ip link set dev enp0s3 up

And checking the addresses assigned to this interface shows:

[root@beta ~]# ip addr show dev enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:84:41:4d brd ff:ff:ff:ff:ff:ff
 inet6 2001:db8::a00:27ff:fe84:414d/64 scope global dynamic 
    valid_lft 86168sec preferred_lft 14168sec
 inet6 fe80::a00:27ff:fe84:414d/64 scope link 
    valid_lft forever preferred_lft forever

From this I can see that the interface has been assigned a link-local address, fe80::a00:27ff:fe84:414d/64, based on the standard link-local prefix of fe80::/64 and the MAC address of the interface converted to a modified EUI-64 address.  I can also see that the interface has a second, global, address based on the prefix we supplied to radvd, 2001:db8::/64, combined with the same modified EUI-64 host part.
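
As a quick worked example of that conversion: take the MAC address 08:00:27:84:41:4d, insert ff:fe between its two halves to get 08:00:27:ff:fe:84:41:4d, and flip the universal/local bit (the 0x02 bit of the first octet) so that 08 becomes 0a.  Regrouped as hextets this is 0a00:27ff:fe84:414d, which is exactly the host portion of both addresses above.

I can use this address to access beta with ping and ssh: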

[root@alpha ~]# ping6 2001:db8::a00:27ff:fe84:414d
PING 2001:db8::a00:27ff:fe84:414d(2001:db8::a00:27ff:fe84:414d) 56 data bytes
64 bytes from 2001:db8::a00:27ff:fe84:414d: icmp_seq=1 ttl=64 time=0.797 ms
64 bytes from 2001:db8::a00:27ff:fe84:414d: icmp_seq=2 ttl=64 time=0.325 ms
64 bytes from 2001:db8::a00:27ff:fe84:414d: icmp_seq=3 ttl=64 time=0.357 ms
^C
--- 2001:db8::a00:27ff:fe84:414d ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.325/0.493/0.797/0.215 ms
[root@alpha ~]# ssh 2001:db8::a00:27ff:fe84:414d
root@2001:db8::a00:27ff:fe84:414d's password: 
Last login: Wed Sep 11 12:30:19 2013 from 2001:db8::a00:27ff:fe11:14ff
[root@beta ~]#

If I connected a third host to this network segment and brought up its Ethernet interface it would also automatically obtain an IPv6 address with the 2001:db8::/64 prefix and be able to use this to communicate with other systems.

Next Steps

There’s much more to SLAAC than I’ve described above.  It also interacts with DHCPv6 to configure additional information, such as NTP servers, on systems.  I’ll cover these topics in a subsequent blog post.

Updated BT Sport App

A quick update.

On August 31st BT released a new version of their app for iOS devices that fixed the 12 hour/24 hour clock issue I had complained about before.

<sarcasm>That’s not bad, a full month to identify and resolve such a complex issue!</sarcasm>

IPv6 Networking with Fedora

Now that I’ve got a better understanding of the recent changes to the Fedora networking stack I have been experimenting with IPv6 networking.  Over the next few years this is going to be a hot topic as we completely exhaust the IPv4 address space, become more aware of the limitations of NAT-based solutions, and watch the Internet of Things start to take shape.

To get started I did a minimal install of Fedora 19 into a VirtualBox VM which had a single, host-only, network interface.  Once the install was complete the first thing I did was to disable and remove both NetworkManager and firewalld so that I could control the system manually:

systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl stop firewalld.service
systemctl disable firewalld.service
yum remove firewalld NetworkManager NetworkManager-glib

So that I got consistent device names for my Ethernet devices I also removed the biosdevname package and rebooted:

yum remove biosdevname
shutdown -r now

Once the system came back up the only network interface that was up and running was the loopback interface, which had both an IPv4 and an IPv6 address assigned automatically:

[root@alpha ~]# ip addr show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

You can see that device lo has the usual IPv4 address of 127.0.0.1/8 and that there is also an IPv6 address of ::1/128 assigned.  IPv6 addresses are 128 bits in length; when written down they are usually broken into groups of 4 bits, each represented by a single hexadecimal digit.  For example the four bit value 0000 is 0 and the value 0001 is 1.  Groups of 4 hexadecimal digits, representing 16 bits of the IPv6 address, are separated by colons, so our loopback address written in full is 0000:0000:0000:0000:0000:0000:0000:0001.  These 4 digit groups are sometimes referred to as hextets.

As you can see writing addresses in this manner is quite unwieldy so there are two mechanisms that allow us to compress the address.  The first is that within a hextet we can strip all of the leading zeros.  For our loopback address this reduces the last hextet from 0001 to 1, which is a slight improvement.  The second mechanism is that the largest, contiguous, group of hextets with all zeros can be replaced by a double colon (::).  In our loopback example that means we can replace all of the zeros with the double colon to get the final address of ::1.

Note that you can only have one occurrence of :: within an address.  If you want to know more, Wikipedia has some more detail.
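
As another worked example, the address 2001:0db8:0000:0000:0000:ff00:0042:8329 first compresses to 2001:db8:0:0:0:ff00:42:8329 by stripping leading zeros, and then to 2001:db8::ff00:42:8329 by collapsing the run of zero hextets to ::.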

With this in mind we can then ping both loopback addresses:

[root@alpha ~]# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.049 ms

--- 127.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.035/0.044/0.049/0.008 ms
[root@alpha ~]# ping6 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from ::1: icmp_seq=2 ttl=64 time=0.053 ms
64 bytes from ::1: icmp_seq=3 ttl=64 time=0.044 ms

--- ::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.044/0.049/0.053/0.006 ms

Note that for IPv6 you need to use the ping6 command.  It’s a bit unfortunate, but only some commands are dual stack, i.e. take either IPv4 or IPv6 addresses and work as expected – ping isn’t one of these!

The next step is to bring the actual Ethernet interface up and start configuring IPv6 so that we can communicate between hosts and access services.  I can use the ip link show command to list all of the interfaces on my host; in my case I have a single interface called enp0s3 which is currently in the down state.  I can bring the interface up with the command ip link set enp0s3 up.
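
For reference, these are the two commands exactly as described above:

[root@alpha ~]# ip link show
[root@alpha ~]# ip link set enp0s3 up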

I can then check the state of the interface with the following:

[root@alpha ~]# ip addr show dev enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:11:14:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe11:14ff/64 scope link
       valid_lft forever preferred_lft forever

The interface is up and, interestingly, it’s been assigned an IPv6 address.  This is one of the nice features of IPv6: every interface has what’s called a link-local address.  The link-local address is only valid on the particular physical network segment that the interface is connected to; packets to and from this address will not be routed.

Link-local addresses can be manually assigned or automatically generated.  In this case the address has been automatically generated.  To generate the address the host used the assigned link-local prefix fe80::/64 and created a modified EUI-64 host address from the MAC address of the interface; these were combined to give the address fe80::a00:27ff:fe11:14ff/64.

It is possible to manually configure the link-local address.  It’s a two-step process: first you remove the automatically assigned address, then you add the address you want:

[root@alpha ~]# ip addr del dev enp0s3 fe80::a00:27ff:fe11:14ff/64
[root@alpha ~]# ip addr add dev enp0s3 scope link fe80::1/64
[root@alpha ~]# ip addr show dev enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:11:14:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever

Now that the interface has an address I should be able to ping it much like I did with the loopback interface:

[root@alpha ~]# ping6 fe80::1
connect: Invalid argument

The reason that this fails is that the address is link-local; if a host had two interfaces it would be valid for them both to have the same link-local address, so the above command is ambiguous.  The solution is to supply the -I <interface> argument:

[root@alpha ~]# ping6 -I enp0s3 fe80::1
PING fe80::1(fe80::1) from fe80::1 enp0s3: 56 data bytes
64 bytes from fe80::1: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from fe80::1: icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from fe80::1: icmp_seq=3 ttl=64 time=0.047 ms

--- fe80::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.038/0.047/0.057/0.009 ms

You can also specify the link/interface by scoping the address.  This is done by appending a suffix to the address:

[root@alpha ~]# ping6 fe80::1%enp0s3

At this point I’ve got a single host with a single Ethernet interface which has a link-local IPv6 address either manually or automatically configured.  I’ll publish another blog post shortly which will add a second host to the network and hopefully I’ll get them communicating using IPv6.