Issues with upgrade from 8.18.1 to 9.0.1

Hello,
I'm trying to upgrade my current 8.18.1 cluster to 9.0.1 but I'm facing a lot of issues.
Most of them are compatibility issues preventing the upgrade.

For example, every time I try, I receive something like the below:

"error.stack_trace":"java.lang.IllegalStateException: The index [.reporting-2021.10.03/7qFTO5dhTDm4fdsinJ1kQw] created in version [7.8.0] with current compatibility version [7.8.0] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.1

On my side I used the Upgrade Assistant and found some errors, but right now I have:

I have 1 critical issue (marked as read-only) and I tried to launch 'Migrate indices'.
I got stuck on the Kibana step:

After a restart and another attempt to upgrade Elastic I still get errors for, for example, .kibana-task-manager.

So I have a question: what preparations should I do before upgrading to version 9?
Is there any tool available, besides the migration assistant, that tells me what I should do before the migration?

Thanks for reaching out, @dominbdg. I have a follow-up question here - could you elaborate on how you are hosting Elastic?

Ah sorry,
it's on a local server - it is running independently in Docker.

So this appears to be a very long-running cluster, upgraded a number of times, I suspect.

And yes (I know you were in another thread), you may need to mark this specific index, and perhaps others, as read-only.

It is possible that when you went from 7.x to 8.x this should have been fixed, and for some reason it was not, so the 8.x to 9.x Upgrade Assistant is not catching it... just a hypothesis.

Can you run the Upgrade Assistant API and share what you see?

Hello Stephen,
Thanks for helping me again.
The thing is that the Upgrade Assistant API is not working from Dev Tools - I don't know why.
I'm receiving the below error (from Dev Tools with the elastic account):

{
"error": "no handler found for uri [/api/upgrade_assistant/status?pretty=true] and method [GET]"
}

And yes, you're right - this ELK cluster has been through quite a lot of upgrades, and I'm stuck on the upgrade to version 9.

Where are you seeing the IllegalStateException? Is that in 8.18 or on a 9.0 node? I would expect a 7.8 index to be fine on 8.18. If you call the deprecation info API from dev tools I would expect to see a critical message about it (which I assume is that 1 critical issue in your image), but I would not expect it to be throwing an IllegalStateException. In theory, you just need to click on that critical issue and choose whether you want to reindex the old index, or flag it as read-only going forward.
There might be more information about the security_exception causing the system index migration failure in the elasticsearch logs on one of your elasticsearch nodes in the cluster.
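For reference, the deprecation info API is an Elasticsearch API, so it can be run from Dev Tools as-is (no prefix needed):

GET _migration/deprecations

It returns the cluster, node, index, and data stream deprecations in one JSON response.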

I don't know why I'm receiving the below error (from Dev Tools with the elastic account):

I think that you are probably hitting Elasticsearch (usually port 9200) from Dev Tools, and that is a Kibana (usually port 5601) API. Elasticsearch doesn't know anything about it.

Hello,
Regarding /api/upgrade_assistant - yes, you're right, I realized it a while after typing it into Dev Tools.

Many thanks for the tip about /_migration/deprecations.
I was expecting exactly such info - I had it in the back of my head that I probably have quite a lot of old indices there, but I just didn't know how to find them.

In the output I found quite a lot of indices which are very old - some of them system indices, but most of them very old ones which are completely not needed, so candidates for removal.

Sorry - in Kibana Dev Tools you need to call Kibana APIs with a prefix. You have to do this:

EDIT / FIXED

GET kbn:/api/upgrade_assistant/status

The kbn: indicates it's a Kibana API and it should work

See here

I have an issue with that:

{
"statusCode": 400,
"error": "Bad Request",
"message": "[request query.pretty]: definition for this key is missing"
}

Right now I'm trying to figure out how to solve it.
I thought the issue was on my side with the encryption keys, but it's not.

just drop the ?pretty=true
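In other words, just:

GET kbn:/api/upgrade_assistant/status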

Ah, pretty is not supported for that?
Many thanks - it finally works fine.

I have to ask about one thing:

From the
kbn:/api/upgrade_assistant

I get only this information:

{
"readyForUpgrade": false,
"details": "The following issues must be resolved before upgrading: 3 unmigrated system indices, 1 Elasticsearch deprecation issue."
}

Which tells me that I have an issue with system indices, but I don't know which indices.
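I guess the feature migration status API should be able to list which system features (and their indices) still need migrating - something like:

GET _migration/system_features

If I read the docs right, each feature in the response comes back with a migration_status field.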

When I tried to find something using:
GET _migration/deprecations

#! this request accesses system indices: [.apm-agent-configuration-reindexed-for-9, .apm-custom-link-reindexed-for-9, .async-search-reindexed-for-9, .kibana_1-reindexed-for-9, .kibana_7.16.3_001-reindexed-for-9, .kibana_7.17.6_001-reindexed-for-9, .kibana_8.10.1_001, .kibana_8.5.1_001, .kibana_8.7.0_001, .kibana_alerting_cases_8.10.1_001, .kibana_analytics_8.10.1_001, .kibana_entities-definitions-1, .kibana_ingest_8.10.1_001, .kibana_security_session_1-reindexed-for-9, .kibana_security_solution_8.10.1_001, .kibana_task_manager_1-reindexed-for-9, .kibana_task_manager_7.16.3_001-reindexed-for-9, .kibana_task_manager_7.17.6_001, .kibana_task_manager_7.17.6_001-reindexed-for-9, .kibana_task_manager_8.5.1_001, .kibana_task_manager_8.7.0_001, .kibana_usage_counters_8.17.1_001, .reporting-2020.09.27, .reporting-2020.10.04, .reporting-2020.10.11, .reporting-2020.10.18, .reporting-2020.10.25, .reporting-2020.11.01, .reporting-2020.11.08, .reporting-2020.11.15, .reporting-2020.11.22, .reporting-2020.11.29, .reporting-2020.12.06, .reporting-2020.12.13, .reporting-2020.12.27, .reporting-2021.01.03, .reporting-2021.01.10, .reporting-2021.01.17, .reporting-2021.01.24, .reporting-2021.01.31, .reporting-2021.02.07, .reporting-2021.02.14, .reporting-2021.02.21, .reporting-2021.02.28, .reporting-2021.03.07, .reporting-2021.03.14, .reporting-2021.03.21, .reporting-2021.03.28, .reporting-2021.04.04, .reporting-2021.04.11, .reporting-2021.04.18, .reporting-2021.04.25, .reporting-2021.05.02, .reporting-2021.05.09, .reporting-2021.05.16, .reporting-2021.05.23, .reporting-2021.05.30, .reporting-2021.06.06, .reporting-2021.06.13, .reporting-2021.06.20, .reporting-2021.06.27, .reporting-2021.07.04, .reporting-2021.07.11, .reporting-2021.07.18, .reporting-2021.07.25, .reporting-2021.08.01, .reporting-2021.08.08, .reporting-2021.08.15, .reporting-2021.08.22, .reporting-2021.08.29, .reporting-2021.09.05, .reporting-2021.09.12, .reporting-2021.09.19, .reporting-2021.09.26, .reporting-2021.10.03, .reporting-2021.10.10, .reporting-2021.10.17, .reporting-2021.10.24, .reporting-2021.10.31, .reporting-2021.11.07, .reporting-2021.11.14, .reporting-2021.11.21, .reporting-2021.11.28, .reporting-2021.12.05, .reporting-2021.12.12, .reporting-2021.12.19, .reporting-2022-02-27, .reporting-2022-03-06, .reporting-2022-04-10, .reporting-2022-04-17, .reporting-2022-04-24, .reporting-2022-05-01, .reporting-2022-05-08, .reporting-2022-05-15, .reporting-2022-05-22, .reporting-2022-05-29, .reporting-2022-06-05, .reporting-2022-06-12, .reporting-2022-06-19, .reporting-2022-06-26, .reporting-2022-07-03, .reporting-2022-07-10, .reporting-2022-07-17, .reporting-2022-07-24, .reporting-2022-08-07, .reporting-2022-08-14, .reporting-2022-08-28, .reporting-2022-09-04, .reporting-2022-09-11, .reporting-2022-09-18, .reporting-2022-09-25, .reporting-2022-10-02, .reporting-2022-10-16, .reporting-2022-10-23, .reporting-2022-10-30, .reporting-2022-11-06, .reporting-2022-11-13, .reporting-2022-11-20, .reporting-2022-11-27, .reporting-2022-12-04, .reporting-2022-12-11, .reporting-2022-12-18, .reporting-2022.01.02, .reporting-2022.01.09, .reporting-2022.01.16, .reporting-2022.01.23, .reporting-2022.01.30, .reporting-2022.02.06, .reporting-2022.02.13, .reporting-2023-01-01, .reporting-2023-01-08, .reporting-2023-01-15, .reporting-2023-01-22, .reporting-2023-01-29, .reporting-2023-02-05, .reporting-2023-02-12, .reporting-2023-02-19, .reporting-2023-02-26, .reporting-2023-03-05, .reporting-2023-03-12, .reporting-2023-03-19, .reporting-2023-04-02, 
.reporting-2023-04-09, .reporting-2023-04-16, .reporting-2023-04-23, .reporting-2023-04-30, .reporting-2023-05-07, .reporting-2023-05-14, .reporting-2023-05-21, .reporting-2023-05-28, .reporting-2023-06-04, .reporting-2023-06-11, .reporting-2023-06-18, .reporting-2023-06-25, .reporting-2023-07-02, .reporting-2023-07-09, .reporting-2023-07-16, .reporting-2023-07-23, .reporting-2023-07-30, .reporting-2023-08-06, .reporting-2023-08-13, .reporting-2023-08-20, .reporting-2023-08-27, .reporting-2023-09-03, .reporting-2023-09-10, .reporting-2023-09-17, .reporting-2023-09-24, .reporting-2023-10-01, .reporting-2023-10-08, .reporting-2023-10-15, .reporting-2023-10-22, .reporting-2023-10-29, .reporting-2023-11-05, .reporting-2023-11-12, .reporting-2023-11-19, .reporting-2023-11-26, .reporting-2023-12-03, .reporting-2023-12-10, .reporting-2023-12-17, .reporting-2023-12-31, .reporting-2024-01-07, .reporting-2024-01-14, .reporting-2024-01-21, .reporting-2024-01-28, .reporting-2024-02-04, .reporting-2024-02-11, .reporting-2024-02-18, .reporting-2024-02-25, .reporting-2024-03-03, .reporting-2024-03-10, .reporting-2024-03-17, .reporting-2024-03-24, .reporting-2024-03-31, .reporting-2024-04-07, .reporting-2024-04-14, .reporting-2024-04-21, .reporting-2024-04-28, .reporting-2024-05-05, .reporting-2024-05-19, .reporting-2024-05-26, .reporting-2024-06-02, .reporting-2024-06-09, .reporting-2024-06-16, .reporting-2024-06-23, .reporting-2024-06-30, .reporting-2024-07-07, .reporting-2024-07-14, .reporting-2024-07-21, .reporting-2024-07-28, .reporting-2024-08-04, .reporting-2024-08-11, .reporting-2024-08-18, .security-7, .security-profile-8, .tasks, .transform-internal-007], but in a future major version, direct access to system indices will be prevented by default
{
"cluster_settings": ,
"node_settings": [
{
"level": "warning",
"message": "setting [xpack.monitoring.collection.enabled] is deprecated and will be removed after 8.0",
"url": "Migrating to 7.16 | Elasticsearch Guide [7.16] | Elastic",
"details": "the setting [xpack.monitoring.collection.enabled] is currently set to [true], remove this setting (nodes impacted: [elastic-tst-0])",
"resolve_during_rolling_upgrade": false
}
],
"data_streams": {
".logs-deprecation.elasticsearch-default": [
{
"level": "critical",
"message": "Old data stream with a compatibility version < 8.0",
"url": "Migrating to 8.0 | Elasticsearch Guide [8.18] | Elastic",
"details": "This data stream has backing indices that were created before Elasticsearch 8.0.0",
"resolve_during_rolling_upgrade": false,
"_meta": {
"indices_requiring_upgrade": [
".ds-.logs-deprecation.elasticsearch-default-2023.01.11-000024",
".ds-.logs-deprecation.elasticsearch-default-2022.06.29-000010",
".ds-.logs-deprecation.elasticsearch-default-2022.04.20-000005",
".ds-.logs-deprecation.elasticsearch-default-2022.05.04-000006",
".ds-.logs-deprecation.elasticsearch-default-2022.05.18-000007",
".ds-.logs-deprecation.elasticsearch-default-2022.07.13-000011",
".ds-.logs-deprecation.elasticsearch-default-2022.09.07-000015",
".ds-.logs-deprecation.elasticsearch-default-2022.11.02-000019",
".ds-.logs-deprecation.elasticsearch-default-2022.03.23-000003",
".ds-.logs-deprecation.elasticsearch-default-2022.09.21-000016",
".ds-.logs-deprecation.elasticsearch-default-2022.11.30-000021",
".ds-.logs-deprecation.elasticsearch-default-2022.10.19-000018",
".ds-.logs-deprecation.elasticsearch-default-2022.10.05-000017",
".ds-.logs-deprecation.elasticsearch-default-2022.12.14-000022",
".ds-.logs-deprecation.elasticsearch-default-2022.06.15-000009",
".ds-.logs-deprecation.elasticsearch-default-2022.06.01-000008",
".ds-.logs-deprecation.elasticsearch-default-2022.12.28-000023",
".ds-.logs-deprecation.elasticsearch-default-2022.08.10-000013",
".ds-.logs-deprecation.elasticsearch-default-2022.08.24-000014",
".ds-.logs-deprecation.elasticsearch-default-2022.07.27-000012",
".ds-.logs-deprecation.elasticsearch-default-2022.04.06-000004",
".ds-.logs-deprecation.elasticsearch-default-2022.02.23-000001",
".ds-.logs-deprecation.elasticsearch-default-2022.03.09-000002",
".ds-.logs-deprecation.elasticsearch-default-2022.11.16-000020"
],
"indices_requiring_upgrade_count": 24,
"total_backing_indices": 52,
"reindex_required": true
}
}
]
},
"ml_settings": ,
"templates": {},
"index_settings": {},
"ilm_policies": {}
}

I set all of those to read-only.
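For the record, a write block like the one the error asks for can be set with something like this (a sketch - expand_wildcards is there so the pattern also matches hidden indices):

# apply index.blocks.write to all the old reporting indices at once
PUT .reporting-*/_settings?expand_wildcards=all
{
  "index.blocks.write": true
}

# or, per index, via the add-block endpoint:
PUT .reporting-2022-09-11/_block/write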

And I tried again to migrate to version 9, and I have an issue with the reporting indices:

{"@timestamp":"2025-05-31T20:11:33.006Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"lastic-tst-0","elasticsearch.cluster.name":"tst-cluster","error.type":"java.lang.IllegalStateException","error.message":"The index [.reporting-2022-09-11/5gbLJ932QK279t_B9sAOPg] created in version [7.16.3] with current compatibility version [7.16.3] must be marked as read-only using the setting [index.blocks.write] se to [true] before upgrading to 9.0.1.","error.stack_trace":"java.lang.IllegalStateException: The index [.reporting-2022-09-11/5gbLJ932QK279t_B9sAOPg] created in version [7.16.3] with current compatibility version [7.16.3] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgradingto 9.0.1.\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.isReadOnlySupportedVersion(IndexMetadataVerifier.java:180)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.checkSupportedVersion(IndexMetadataVerifier.java:126)\n\tat or.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.verifyIndexMetadata(IndexMetadataVerifier.java:98)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.upgradeMetadata(GatewayMetaState.java:298)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gatewy.GatewayMetaState.upgradeMetadataForNode(GatewayMetaState.java:285)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createOnDiskPersistedState(GatewayMetaState.java:193)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createPersistedState(GatewayMetaStat.java:147)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:105)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.node.Node.start(Node.java:315)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:648)\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:445)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:102)\n"}

I'm starting to think that all those issues are related to version 7.x indices which were not correctly upgraded, and that I should set all of them to read-only or reindex them.

How can I find the indices created with version 7.x?

There might be other ways, but index version info is in the output of a GET to _all/_settings - something like this (replace the variables with your own values):

curl -sk -u "${EUSER}:${EPASS}" "https://${EHOST}:${EPORT}" --request-target '_all/_settings?expand_wildcards=all' -X GET | jq -r 'to_entries[] | "\(.value.settings.index.creation_date) \(.value.settings.index.version.created) \(.key)"' | sort -k1nr -k2nr | while read f1 f2 f3 ; do echo $f2 $f1 $(date --utc --iso=seconds -d @$(( $f1  / 1000 )) ) $f3 ; done

(assuming you have curl & jq installed somewhere)

My oldest 5 on my test system are (pipe the above long command into tail -5):

8500003 1704375782425 2024-01-04T14:43:02+01:00 .kibana_security_solution_8.11.3_001
8500003 1704375782267 2024-01-04T14:43:02+01:00 .kibana_8.11.3_001
8500003 1704375782167 2024-01-04T14:43:02+01:00 .kibana_analytics_8.11.3_001
8500003 1704375782133 2024-01-04T14:43:02+01:00 .kibana_task_manager_8.11.3_001
8500003 1704375694023 2024-01-04T14:41:34+01:00 .security-7

First column is the version number, second is the index creation time (unix time since epoch, in milliseconds), third is that time in a more human-readable format, and the last column is the index name.

BTW, do you really need those indices created sometime in 2022 or 2023? If not, you should at least consider just deleting them.
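For example, something like this (a sketch - note that wildcard deletes are blocked by default via action.destructive_requires_name, so you may have to delete them by exact name):

# delete one obsolete reporting index by name
DELETE .reporting-2021.10.03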

You pointed out an interesting thing to me.
Yeah, in fact I don't have jq installed, because none of the package installers are working for me.

In fact, when I had Elastic 8.x, apt install was working for me, but on 9.0.1 I don't know how to install packages because nothing works for me.

I'm afraid package management issues aren't really anything to do with Elastic (8.x or 9.x). Maybe you can take that up on a more general Linux forum and figure out what to do - you probably should be updating your system regularly for security reasons.

For jq specifically, you could download a binary, or just build it from source.

OK, I have jq now - and your script works fine.

One thing though: even after I set those old indices to read-only mode, during the upgrade Elasticsearch doesn't see it and keeps sending me the error.
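I would expect to be able to verify the block with something like:

# should show index.blocks.write: "true" if the block applied
GET .security-7/_settings?filter_path=*.settings.index.blocks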

I'm thinking about reindexing, but would it be OK to reindex those indices to a new name and then delete the old ones?

I'm asking because among them I have .security-7, which I cannot delete.

I don't know if I can reindex those indices to the same name in the destination.

So did the upgrade finish? What is the current state of the cluster?

No, don't try / do that manually - you should get new security indices with the upgrade.

Just leave the old ones in read-only.

What state is the upgrade in? Show us all those new security indices.
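Something like this should list them, hidden ones included:

GET _cat/indices/.security*?v&expand_wildcards=all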

I'm receiving the below issue:

{"@timestamp":"2025-06-01T16:46:55.289Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elastic-tst-0","elasticsearch.cluster.name":"tst-cluster","error.type":"java.lang.IllegalStateException","error.message":"The index [.security-7/_OJLwtKtQWy_HQGRNuz2SA] created in version [7.8.0] with current compatibility version [7.8.0] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.1.","error.stack_trace":"java.lang.IllegalStateException: The index [.security-7/_OJLwtKtQWy_HQGRNuz2SA] created in version [7.8.0] with current compatibility version [7.8.0] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.1.\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.isReadOnlySupportedVersion(IndexMetadataVerifier.java:180)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.checkSupportedVersion(IndexMetadataVerifier.java:126)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.verifyIndexMetadata(IndexMetadataVerifier.java:98)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.upgradeMetadata(GatewayMetaState.java:298)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.upgradeMetadataForNode(GatewayMetaState.java:285)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createOnDiskPersistedState(GatewayMetaState.java:193)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createPersistedState(GatewayMetaState.java:147)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:105)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.node.Node.start(Node.java:315)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:648)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:445)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:102)\n"}

And next the native controller and the node just stopped:

{"@timestamp":"2025-06-01T16:45:59.379Z", "log.level": "INFO", "message":"Native controller process has stopped - no new native processes can be started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"ml-cpp-log-tail-thread","log.logger":"org.elasticsearch.xpack.ml.process.NativeController","elasticsearch.node.name":"elastic-tst-0","elasticsearch.cluster.name":"tst-cluster"}
{"@timestamp":"2025-06-01T16:46:19.996Z", "log.level": "INFO", "message":"stopped", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch-shutdown","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"elastic-tst-0","elasticsearch.cluster.name":"tst-cluster"}

So I'm wondering if a reindex will solve that issue.

"The index  [.security-7/_OJLwtKtQWy_HQGRNuz2SA]
created in version [7.8.0] with current compatibility version [7.8.0] 
must be marked as read-only using the setting
 [index.blocks.write] set to [true] before upgrading to 9.0.1."

Are you saying you did this and it still will not work?

What's the exact command you are using to set that index to read-only? And what is the response?
