Recently I've started to migrate some of my VPSs to Netcup, as I have found their offerings more performant, and less expensive, than other VPS providers.
However, the way Netcup's IPv6 network is set up seems to cause some issues with FreeBSD (and allegedly other BSDs), leading to loss of IPv6 connectivity.

TL;DR:

Using Netcup's IPv6 gateway fe80::1 with FreeBSD eventually results in lost IPv6 connectivity.
Instead of using the link-local address as a gateway, use a gateway address in the same /48 as your allocated IPv6 range, and set your IPv6 address prefix to /48 instead of /64; this seems to work fairly reliably.
For example, if Netcup provide you with the IPv6 range 2a03:4000:AAAA:BBBB::/64, you would set your address statically in rc.conf but with a /48 prefix, and set 2a03:4000:AAAA::2 and/or 2a03:4000:AAAA::3 as your gateway.

# /etc/rc.conf
#ifconfig_vtnet0_ipv6="inet6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff/64"
ifconfig_vtnet0_ipv6="inet6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff/48"
#ipv6_defaultrouter="fe80::1%vtnet0"
ipv6_defaultrouter="2a03:4000:AAAA::2"

Switch the default route to the other gateway with:

route -6 add default 2a03:4000:AAAA::3

I've had the most reliability with both the ::2 and ::3 gateways set; YMMV.
It appears that in each Netcup /48 subnet, ::2 and ::3 act as gateways.


On with the full story...

Netcup

I have moved to Netcup for a few reasons: 1. cost, 2. better performance than my previous provider, and 3. the ability to provide my own ISOs for OS installs.
Netcup in my experience are great, I have a few Debian systems with them that have been super reliable.
But, I want to use FreeBSD for my projects, so my new VPSs run FreeBSD.

The Issue

All is well on the IPv4 front: the DHCP client picks up my provided IPv4 address and connectivity is there, great. I still set it statically for consistency, but either way it works.
IPv6, however, is a different story. FreeBSD doesn't seem to get its provided IPv6 address via SLAAC, so no address gets assigned. That's OK though, I can set it statically in rc.conf. Netcup provide a /64 IPv6 subnet for each VPS, so I can pick any IPv6 address in the range they have provided, but they also sent one via email, so I'll just use that for now.

...
ifconfig_vtnet0_ipv6="inet6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff/64"
...

2a03:4000 is Netcup's network, I believe, with AAAA defining a certain part of that network.
BBBB is the subnet provided to me, and cccc:dddd:eeee:ffff is the host portion I can use within that subnet.

Netcup's IPv6 network is switched, not routed. I'm sure there's a sensible reason for this, but for now the important bit is that the IPv6 gateway should always be fe80::1%vtnet0 (where vtnet0 is the external interface name) rather than another 2a03:4000:AAAA:BBBB:: address. I pop that into my rc.conf as well.

...
ipv6_defaultrouter="fe80::1%vtnet0"
...

Reboot the VPS, and all works well. ping6s work, curls to IPv6 addresses work, incoming IPv6 connections work. Great! Done...

Not Done.

After about 10 minutes of connectivity, IPv6 just drops out. ping6s fail, curls fail, no connectivity on IPv6 from the outside world. Drat.
Reboot the VPS again... oh, it's back. Great. Wait a few minutes, it goes again. Reboot, back, wait, gone. And so on.

Tests and Research

Obviously the first thing I'm thinking is firewall. So I disable pf and reboot and wait again - no difference, IPv6 just stops working after a few minutes.

With my fairly limited IPv6 knowledge (it's not big in the UK... yet), using fe80::1 as a gateway feels weird, so I run traceroute6 anyaddress.com to see what the first hop on the route is (it must be my actual gateway, right?) and pop that address into my ipv6_defaultrouter= config in rc.conf.
Reboot... no joy, no IPv6 connectivity at all. OK, I set that back.

I browse the Netcup forums a bit and find a handful of posts from people experiencing the same issue. Some mention leasing an additional IPv6 range from Netcup and using that instead, which allegedly works for them. I tried this, and it did not work for me - IPv6 worked, for about 10 minutes again, then dropped off as before.
The FreeBSD forums have the same sort of thing: posts from a few years ago.

Both forums have posts saying the issue lies in the fact that the IPv6 address and the [actual] gateway are in different subnets. My simple understanding of this is: the gateway may have address 2a03:4000:AAAA::2, which sits in the /48, while my VPS has an address like 2a03:4000:AAAA:BBBB::1234 in a /64. Because they are not in the same subnet, something happens at some point that stops them being able to communicate. I believe this "something" is to do with Netcup's switching (rather than routing).

Of the people experiencing this, some have had the IPv6 Connectivity come back and drop off again on its own. I haven't experienced this, but perhaps I'm just impatient.

So, I write a post on the FreeBSD forums with a summary of the things I've tried, asking for help. I link it from a Fediverse post and wait for responses. It doesn't take long.

Switching the subnet

@fab responds with the idea of using a /48 prefix instead of /64. Given what I have learned up to this point, this could make sense - the gateway shares the first 48 bits (2a03:4000:AAAA) of my /64 subnet after all, so why not put my IPv6 address in that same subnet. It could cause issues communicating with other servers in different /64 subnets within the same /48, but let's risk it for now.

Unfortunately, I experience the same issues again here. After about 10 minutes the connectivity drops off. Reboot, OK, wait, gone.

Learn German

Maybe it's because I'm a native English speaker that I didn't do this before, but I thought I'd look more in depth at the Netcup forums and found this thread by a German speaker having the same issues. Fortunately, in-browser translation was able to handle it for me and I got the gist of what was going on.
This post in particular was interesting, along with this post by the same author - they suggest two things I've tried before, using a /48 prefix instead of /64 and setting a route to the actual gateway address (not the fe80::1 address), but I hadn't tried them both at the same time!

Let's try those together!

I'll get my default gateway via traceroute6 again:

> traceroute6 netcup.de
traceroute6 to netcup.de (2a03:4000::e01e) from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff, 64 hops max, 28 byte packets
 1  2a03:4000:AAAA::2  0.472 ms  1.058 ms  0.604 ms

Pop that in my rc.conf and also change my prefix to /48 while I'm there.

...
ifconfig_vtnet0_ipv6="inet6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff/48"
ipv6_defaultrouter="2a03:4000:AAAA::2"
...

Reboot. It's working. Wait. It's still working! Wait a little longer. Still working!
This is progress! ping6s and curls are working great!
Oh, incoming connections aren't so good...

# From another host
> ping6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff
...
From 2a00:11c0:AA:B::fff icmp_seq=4 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=5 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=6 Destination unreachable: Address unreachable

So ping6s go out OK, but nothing is coming in.

Switching, not routing

As previously mentioned, Netcup switches their IPv6 traffic rather than routing it. I won't pretend I know exactly what this means, but it feels likely that the active gateway could "switch" periodically, and that in normal circumstances fe80::1 is aware of this and caters for it - just not for the BSDs.

I wonder...

Maybe... I could try switching my default route to another gateway IP? This post on the Netcup thread seems to suggest there are a number of 2a03:4000:AAAA::X addresses that could be routers...

route -6 add default 2a03:4000:AAAA::3

Meanwhile on another host...

...
From 2a00:11c0:AA:B::fff icmp_seq=4 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=5 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=6 Destination unreachable: Address unreachable
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=1308 ttl=48 time=5.62 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=1309 ttl=48 time=4.81 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=1310 ttl=48 time=5.04 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=1311 ttl=48 time=4.80 ms
...

PROGRESS!

All Good things...

At this point I'm excited. I switch my rc.conf to use that different gateway. I haven't rebooted yet; I see no reason to just yet. Right now I'm at about icmp_seq=5000 on my outgoing ping6s and incoming are still successful. But... on my remote server...

64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=4504 ttl=48 time=4.86 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=4505 ttl=48 time=4.93 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=4506 ttl=48 time=4.81 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=4507 ttl=48 time=4.87 ms
From 2a00:11c0:AA:B::fff icmp_seq=508 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=509 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=510 Destination unreachable: Address unreachable

Drat!
Outgoing ping6s still seem OK though... let's switch that default route again:

route -6 add default 2a03:4000:AAAA::2

and on the other server:

...
From 2a00:11c0:AA:B::fff icmp_seq=4 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=5 Destination unreachable: Address unreachable
From 2a00:11c0:AA:B::fff icmp_seq=6 Destination unreachable: Address unreachable
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=6323 ttl=48 time=5.42 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=6324 ttl=48 time=4.72 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=6325 ttl=48 time=5.14 ms
64 bytes from 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff: icmp_seq=6326 ttl=48 time=4.94 ms
...

OK, so I can dynamically bring connectivity back if/when it drops. I can probably script this in some way to happen automatically too. Is it perfect? No, but it's something.
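The "script this in some way" idea could be sketched as a small watchdog: probe a known IPv6 host, and flip the default route to the other gateway whenever the probe fails. This is an untested sketch, not something from the forum threads; the gateway addresses, probe host, and interval are placeholders to adapt to your own /48.

```shell
#!/bin/sh
# Watchdog sketch: flip the IPv6 default route between the two Netcup
# gateways whenever outside connectivity drops.
GW_A="2a03:4000:AAAA::2"
GW_B="2a03:4000:AAAA::3"
PROBE="2a03:4000::e01e"   # netcup.de (from the traceroute above); any stable IPv6 host works
INTERVAL=30               # seconds between probes

# Given the current gateway, return the other one.
other_gw() {
    if [ "$1" = "$GW_A" ]; then
        printf '%s\n' "$GW_B"
    else
        printf '%s\n' "$GW_A"
    fi
}

# Only start the loop when invoked with "run", so the file can also be sourced.
if [ "${1:-}" = "run" ]; then
    current="$GW_A"
    while :; do
        if ! ping6 -c 1 "$PROBE" >/dev/null 2>&1; then
            current=$(other_gw "$current")
            route -6 delete default >/dev/null 2>&1
            route -6 add default "$current"
        fi
        sleep "$INTERVAL"
    done
fi
```

Something like `sh ipv6-watchdog.sh run &` (or a cron @reboot entry) would keep it running in the background; it only touches the route table when the probe fails.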

Request for Comments

This is where I am right now. I have a sort-of almost stable IPv6 connection that works most of the time, but it's nowhere near ideal. I am continuing my research and I will update this post if/when I discover anything that stabilises it completely.
In the meantime, I welcome any suggestions for further things I could try, or, indeed, reasons why I should not have done any of the above. I'm familiar with networks, but with IPv6 being ignored in the UK my knowledge of it is a little lacking.

If you want to send any comments, please do so either on this FreeBSD Forum thread or on this Fediverse thread.

EDIT: I guess I should have found this post on the FreeBSD forums earlier; it alludes to the same thing.

Updates

After running with the above for a while, ndp -a shows something like this:

Neighbor                             Linklayer Address  Netif Expire    S Flags
fe80::1%vtnet0                       **:**:**:**:**:** vtnet0 20h12m4s  S R
2a03:4000:AAAA::2                    **:**:**:**:**:** vtnet0 23h59m52s S R
2a03:4000:AAAA::3                    **:**:**:**:**:** vtnet0 12s       R R

So perhaps both gateways are in use? That's good, I guess.
I've seen, a few times but rarely, my remote ping6s to my Netcup VPS become unreachable, but after a short while they start working again without any intervention. I'm hoping this "self-repair" is down to having both routes set up, meaning I don't have to worry about it any more - I just need to set the second gateway up in my rc.conf as well.
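For reference, a sketch of what that rc.conf might look like, using the ipv6_static_routes mechanism from rc.conf(5) to add the second gateway at boot. Whether the kernel accepts a second default route alongside the first depends on multipath routing support, so treat this as an untested sketch rather than a confirmed fix:

```
# /etc/rc.conf (sketch; addresses as in the examples above)
ifconfig_vtnet0_ipv6="inet6 2a03:4000:AAAA:BBBB:cccc:dddd:eeee:ffff/48"
ipv6_defaultrouter="2a03:4000:AAAA::2"
# Attempt a second default route via the other gateway at boot.
ipv6_static_routes="gw3"
ipv6_route_gw3="default 2a03:4000:AAAA::3"
```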

Results

I've had the above in place for a number of hours now, and I have to say the stability has been reasonable: a handful of timeouts, but they seem to self-heal.