Timeline for improvement?

That’s the plan, yes! We don’t expect to go into the weekend with any traffic shaping at all.

1 Like

Absolutely not.

2 Likes

I mean, the “plan” was everything back to normal by 6pm today.
The “plan” was speeds restricted to 250Mb, etc. etc.

Speaking from a NOC point of view, I’m guessing we are waiting on a cross-connect from your core to the CF core to be completed. I’m not backing anyone up here, but speaking from experience, cross-connects can be a pain to get done on time: they require sign-off from two parties, and then the datacentre NOC operations team actually has to go and do the work. Telehouse operate on a priority basis, i.e. if another customer needs help because they are fully offline, they will take priority over a simple thing like a cross-connect ticket. They also run on a restricted skeleton crew after 7pm, so unless it is a time-critical task, things like cross-connects won’t get done (I’ve had numerous dealings with Telehouse North and Equinix).
That being said, if this is going to take a few days, is there no way of traffic shaping Speedtest’s servers (you can get all the server IPs easily)? That should stop some of the slowdown people are seeing. I’m one of them; I’m currently running on a backup connection (£10 a month unlimited Three SIM) as pfSense deems the fibre unstable and keeps disabling the port.
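
On the shaping idea, the mechanics are simple enough. A minimal sketch, assuming a Linux box with tc/HTB somewhere in the egress path and a hand-maintained list of server IPs (the interface and addresses below are placeholders, not anything Yayzi actually runs):

```python
import subprocess

# Placeholder speedtest server IPs: in practice you'd pull the current
# list for your region rather than hard-coding anything.
SPEEDTEST_SERVER_IPS = ["203.0.113.10", "203.0.113.11"]

DEV = "eth0"      # egress interface carrying customer traffic (placeholder)
CAP = "250mbit"   # cap applied only to speedtest destinations

def tc(*args):
    """Run a tc command, echoing it first so the change is auditable."""
    cmd = ["tc", *args]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Root HTB qdisc; unmatched traffic falls into class 1:30, which is left wide open.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:30", "htb", "rate", "10gbit")

# Capped class for speedtest flows only.
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:10",
   "htb", "rate", CAP, "ceil", CAP)

# Steer traffic destined for each speedtest server into the capped class.
for ip in SPEEDTEST_SERVER_IPS:
    tc("filter", "add", "dev", DEV, "protocol", "ip", "parent", "1:",
       "prio", "1", "u32", "match", "ip", "dst", f"{ip}/32", "flowid", "1:10")
```

Roughly the same effect is achievable in pfSense with a limiter plus a firewall rule matching an alias of those destination IPs.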

5 Likes

The frustration is understandable; I’m writing this from my 5G failover.

If everyone here were with another ISP, you wouldn’t even get as far as being told you were going to be traffic managed; you would simply be told to “go kick rocks” and “shout at our chatbot”… from a person in another country, after approximately an hour of hold time.

IMHO, we are almost too spoilt with information. When things change and go wrong (which happens very quickly), too many avenues have to be updated to keep everyone in the know, and speculation starts as soon as one of those avenues falls behind.

The only thing I am extremely annoyed about is this: if it is simply a matter of CityFibre “plugging a cable in”, why is that “someone” not on hand permanently? You are still a business supplying your customers; surely you have to be treated as a priority, like a business customer on a business line, for instance?

3 Likes

Strangely, in all the years I was with Virgin Media I never had an issue this significant, or one that lasted anywhere near as long as this. In fact, across all the ISPs I’ve ever used, I cannot recall any similar situation.

1 Like

You are aware Virgin Media is over 20 years old… Yayzi is still a new provider.

4 Likes

The fact that VMO2 are rated among the worst in the UK for customer service and reliability makes you a minority, I’m afraid.

I’m also confused, because over the years Virgin have had a few NATIONWIDE outages; were you excluded from them? What was the reason for those outages? They have also built a redundant network and have a huge pocketbook compared to Yayzi, not to mention their age.

When I left Virgin for Yayzi my account was 16 years old, and that’s not counting the fact I had been with them since the Nynex/Cable & Wireless days. They were abysmal and expensive… I don’t regret leaving them in the slightest.

6 Likes

You think this is bad over someone plugging a cable in? Try dealing with a tier 1 carrier, e.g. Cogent. Last year my 100Gb port couldn’t even push 12Gbps of throughput; I had to reroute through multiple other carriers, and it took them four hours just to say “it should be fine”. Long and short: two weeks later it was still happening at peak times, and it turned out one of the core switches at the PoP was oversaturated. It took them a month to do something about it (and we’re not talking £50 a month here, we’re talking £8k a month). Welcome to the telco industry: what seems a trivial task to the end user can be a painful task for a provider.

4 Likes

I know VM get a bad rep, but they were always fine for me: the odd blip, sure, but nothing like this.

I understand Yayzi are still new; however, they’re charging broadly average prices, so I’d expect average performance. I’d certainly expect them to learn from mistakes and not repeat them, as they have.

This migration has been terribly planned and executed.

What mistakes have they repeated other than the geo-location issue? That falls on the people who leased out the IPs anyway, and Yayzi has stated that they will be “getting their money back” as the ranges are unusable.

The migration, I imagine, is a first for them, and given they’re dealing with idiots like CityFibre I’m surprised they haven’t thrown in the towel.

I know it’s frustrating, but it “gets worse before it gets better”, and it can’t really get any worse short of downtime. Given they have done everything possible at their end and are now awaiting CF, I honestly don’t know what people expect them to do.

They are suffering many sleepless nights, time away from family, and abuse (from others, not you personally), and that isn’t helping matters.

Having them reply to an email/DM at midnight on a Sunday spoke volumes to me personally. I know for a FACT I wouldn’t get that kind of dedication from another ISP, and I sure as shit wouldn’t do that for my work!

3 Likes

The geo-location issue is the one I’m referencing; they’ve made that mistake at least three times now. You would think they’d learn to test a new range and wait until it isn’t going to cause issues before putting it live; the latest one showing as Iran was particularly troublesome for people.
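
To be fair, a basic pre-deployment spot check doesn’t need millions of lookups. A rough sketch, assuming the free MaxMind GeoLite2 country database and the geoip2 Python library (the prefixes below are documentation placeholders, obviously not their real ranges):

```python
import ipaddress

import geoip2.database  # pip install geoip2; needs GeoLite2-Country.mmdb locally
import geoip2.errors

# Hypothetical new prefixes to sanity-check before they go live.
NEW_PREFIXES = ["203.0.113.0/24", "198.51.100.0/22"]
EXPECTED_COUNTRY = "GB"

with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    for prefix in NEW_PREFIXES:
        net = ipaddress.ip_network(prefix)
        # Sample the first, middle and last usable addresses in the prefix.
        samples = [
            net.network_address + 1,
            net.network_address + net.num_addresses // 2,
            net.broadcast_address - 1,
        ]
        for addr in samples:
            try:
                country = reader.country(str(addr)).country.iso_code
            except geoip2.errors.AddressNotFoundError:
                country = None  # not in the database yet -- also worth flagging
            status = "OK" if country == EXPECTED_COUNTRY else "CHECK"
            print(f"{status}  {addr}  -> {country}")
```

Even a clean result there is only a first pass, since the big content providers maintain their own geolocation feeds and getting those corrected takes time.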

They’ve bitten off more than they can chew with this migration and tried to do too much too quickly; I’d love to know why they moved all of their users over en masse in one go. Clearly their planning and execution have fallen down, and I’m wondering why they’d take such a risky big-bang implementation.

The email just ahead of the migration telling everyone they would be capped to 250Mb was bad enough; that should have been flagged well ahead of the migration by a proper planning and scoping exercise. That speed would have been bad enough, but in actual fact people ended up on 2Mb, which was clearly ridiculous when they’re paying for circa 1Gb.

1 Like

I’d like to clarify an important point: migrations and network operations are somewhat separate matters.

We successfully migrated several thousand customers in under six days—something that has never been achieved on the CityFibre platform before. Typically, migrations are carried out at a rate of around 200 customers per night.

The migration itself achieved its primary objective and was successful in that regard. However, we underestimated the capacity demands—this was a shared oversight between us and our partners. This is a critical lesson we will absolutely learn from and take forward.

We communicated a timeline based on the best information available at the time. Had we not provided a timeline, we would have faced criticism for a lack of transparency. It’s difficult to meet everyone’s expectations, but we acted in good faith.

The old hardware was not under our ownership, and the support from that particular partner had become increasingly inadequate. Continuing with that setup would have led to irreparable issues. Addressing it proactively was the only viable option.

We also want to touch on the matter of IP addresses. We source IPs from our vendors with the assurance that they are clean, usable, and safe. On receiving them, we carry out our own cleaning processes. However, it’s simply not possible to test the hundreds of millions of individual websites associated with those IPs. That said, we are actively engaged in talks with a company to acquire “virgin” IPs to further enhance our network reliability.

This project required a significant financial investment, funded entirely by us. As a self-funded company, we reinvest the money our customers entrust to us. This migration represents several days of disruption in exchange for a lifetime of improved service: lower latency, better peering, and a future-proof infrastructure. The new hardware is capable of handling up to 7Tbps and allows us to add capacity within seven working days at the push of a button. While we stumbled in some areas, the silver lining is that this upgrade ensures we’ll never have to undertake this kind of work again.

We genuinely appreciate you holding us to account—it’s feedback like this that keeps us striving for improvement. We’ve taken your comments on board. As a small but rapidly growing ISP, we’re committed to learning and ensuring our network supports our growth for years to come.

9 Likes

Typically, migrations are carried out at a rate of around 200 customers per night.

I wonder why ISPs typically go slower with stuff like this, instead of doing a Yayzi and biting off more than they can chew?!

You need to do an awful lot of learning from this - you cannot end up in this position again.

1 Like

I would assume other ISPs don’t care about increasing the time to do this by a significant multiplier.

Also, they’ve just said they won’t have to do anything like this again because they now own all the kit.

1 Like

Orrrrrr, an alternative interpretation is other ISPs are more risk averse (rightly) and don’t chuck their entire user base over onto a new implementation in one go because it’s RISKY AF.

They’ve said many things; many of them turned out to be completely wrong…

But if you read between the lines of what was just posted: the hardware and setup they were previously leasing was no longer being supported by the leasing partner and, by the sounds of it, was showing signs of failing. If it had failed, it would have caused a complete outage (for days), whereas rapidly migrating to owned hardware and platforms fully under Yayzi’s control turns a total outage into a degradation of service, which is a somewhat better outcome.

Moral of the story: rapid migrations between platforms happen all the time, and sometimes things happen that are outside anyone’s control or ability to predict.
Correct me if I’ve said anything that’s wrong @Yayzi_Staff

3 Likes

You raise a valid point about the typical pace of migrations. The reason ISPs generally take a slower, more measured approach is to minimise risk and ensure a smooth transition for customers. Smaller batch sizes allow potential issues to be identified and resolved without impacting a large number of users.

In our case, we carefully evaluated the situation and determined that accelerating the migration was necessary. The old hardware had reached a stage where it posed significant risks to reliability, and delaying action could have resulted in far greater disruptions for customers. This was not a decision we took lightly, and we carried out the migration with the best information and intentions.

That said, you’re absolutely right—we have a great deal to learn from this experience. While the migration itself achieved its technical objectives, we underestimated the impact on network speeds and capacity demands. Had we been fully aware of these impacts beforehand, we would have sought an alternative solution to better manage the transition.

The infrastructure we’ve implemented now ensures that we’ll never need to carry out such a large-scale migration again. Furthermore, our new setup allows for capacity expansion within days, giving us the flexibility to adapt quickly to future needs.

3 Likes

If customers are unhappy with the quantity of issues over recent months will you release them from their contract?

If a customer has an issue, we would expect them to email us, and we can discuss any remedies.

1 Like