Continuing issue loading data from Exact Online into Power BI

For a few days now I have been having difficulties refreshing my data model. Normally I get error messages that I can then resolve, but the last few days the data load just “hangs”.

Please check the Invantive Bridge Online Monitoring screenshot below and let me know what I can change/update/adjust.

Is this related?

How many Exact Online-companies are there in this selection?

From what moment in UTC is this screenshot (how much time has passed since 08:41:25.369)?

Can you run the report “Environment analysis” on https://get-my-report.com to assess the number of rows expected per Exact Online company (tab “RowCount”), and include the division codes and row counts in your reply?

The incidents on Invantive Cloud are probably not related. These incidents happen at intervals because some downloads impact other users. Capacity has currently been expanded considerably to minimize the impact while we work on further limiting cross-user impact.

There are currently 31 Exact Online divisions being loaded.
I’m running “Get My Report” as we speak; I’ll post the results when available.

The screenshot was taken at 09:22 (UTC), so about 40 minutes had passed at the time of the screenshot.

2.882.846 rows of data over 31 entities:

Thanks for the information.

In the case of an initial or refresh load (which can occur due to a number of reasons on Exact Online), the expected duration would be approximately one hour. A refresh load can also occur after an initial load.

This time has been exceeded. It looks like a bug in Invantive Cloud that hinders the download with low frequency; this is a glitch that occurs only occasionally. Release 24.0 adds instrumentation to allow analysis of this intermittent issue. However, it is not yet certain that this is the root cause.

To resolve the incident, the download request has been aborted. This is often also possible through the “Abort Request” button in the Invantive Bridge Online Monitoring details.

Please try again.

0HN24UD1TG6EE:00000002 has been running since 15-03-2024 14:11:28.859.
After 2 minutes, the counter stays at 110 000 rows retrieved.
I’ll let it run for now…

It is now 15:58 (UTC) and there has been no change since 14:13 (UTC).

Please check.

For request 0HN24UD1TG6EE:00000002 no relevant information was found. The request terminated after 6 hours.

We will reach out through another channel to acquire delegation.

The following query will be tested for completion:

use all@eol

select Account
,      AmountDC
,      CostCenter
,      CostUnit
,      [Date]
,      Description
,      Division
,      EntryNumber
,      FinancialPeriod
,      FinancialYear
,      GLAccount
,      InvoiceNumber
,      JournalCode
from   TransactionLinesIncremental@eol
where  FinancialYear >= 2022

The expected runtime for real-time retrieval is one hour.

Tests through UniversalSQL

Tested in the UniversalSQL editor.

Number of divisions is 31:

select count(*) 
from   systemdivisions@eol

Number of rows is 2.128.834 (864 seconds, initial load):

select count(*) 
from   transactionlinesincremental@eol

Determining the number of rows per division took 45 seconds (incremental load):

select division
,      count(*) 
from   transactionlinesincremental@eol
group
by     division

29 companies have rows; the largest company has 692640 rows.

What is strange is that 1044242 and 1044251 have no rows in the SQL editor, and that 1046298 has 3803 rows instead of 8913. There are also other differences in the number of rows.

Checked using the following queries, but the number of rows remains 0, even when using other APIs:

use 1044251@eol

select count(*) from transactionlinesbulk@eol

select count(*) from transactionlines@eol
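
To rule out that 1044242 and 1044251 simply fall outside the current partition selection, a hedged additional check is to look the division codes up directly in SystemDivisions. This is a minimal sketch and not part of the original checks:

use all@eol

select code
,      label
from   systemdivisions@eol
where  code in (1044242, 1044251, 1046298)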

It seems that the fast approach to counting rows in TransactionLines does not work reliably (see Estimate size of an Exact Online company). The approach used by Get My Report as described in the linked article has been tested. However, in this case too the number of rows is 0 for 1044251:

create or replace table countqueries@inmemorystorage
as
select ste.name
,      sdn.label
,      sdn.code
,      replace(ODATA_SERVICE_URL_FOR_SELECT, '{division}', sdn.code) || '/$count'
       url
,      ste.name || ' @ ' || sdn.label
       orig_system_group
from   SystemDivisions@eol sdn
join   SYSTEMTABLES@DataDictionary ste
on     ste.PROVIDER_NAME = 'ExactOnlineAll'
and    ste.CATALOG       = 'ExactOnlineREST'
and    ste.schema        = 'Sync'
and ste.odata_service_url_for_select like '%TransactionLines%'
and code = 1044251

--
-- Send the API requests untranslated as "Native Scalar
-- Requests". The results are registered in the table
-- NativePlatformScalarRequests.
--
-- XML output is used; JSON can be acquired by setting
-- the Content-Type.
--
insert into exactonlinerest..nativeplatformscalarrequests@eol
( url
, orig_system_group
, fail_on_error
)
select url
,      orig_system_group
--
-- Replace by true if you don't have permissions on everything.
-- Tables to which the current Exact Online user does not have
-- permissions will then be silently skipped.
--
,      false
from   COUNTQUERIES@InMemoryStorage

--
-- The number of rows per table and company.
--
select orig_system_group
,      RESULT_TEXT
from   exactonlinerest..NATIVEPLATFORMSCALARREQUESTS@eol
where  SUCCESSFUL = 'Y'
and    orig_system_group is not null
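
When the filter on the single division code is removed from COUNTQUERIES, the per-company results can also be totalled for a rough comparison with the Get My Report total. This is a sketch only; to_number is assumed here to convert the $count text into a number:

select sum(to_number(RESULT_TEXT)) total_row_count
from   exactonlinerest..NATIVEPLATFORMSCALARREQUESTS@eol
where  SUCCESSFUL = 'Y'
and    orig_system_group is not null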

Tests on Bridge Online (OData4)

The initial download of https://bridge-online.cloud/acme-exact-online/odata4/ExactOnlineREST.Incremental.TransactionLinesIncremental@eol on Bridge Online was 2,4 GB and completed after 1542 seconds, using curl as the download tool.

However, a redownload had still not finished after 48 minutes.

After this, all downloads ended with itgenboe403 on the Fair Use policy (due to forgetting to request compressed downloads).

Switched to using --compressed with curl. The download now takes 244 seconds for an update and 19 seconds when served from cache.

The failed redownload of 48 minutes seems to resemble the problem experienced. The download will be rerun for some time to see whether a stall can be triggered.

Questions

Can you please double-check in the output of Get My Report whether the 2.882.846 rows are for TransactionLines?

I sent the output of Get My Report by mail.
When I check the transaction lines in Get My Report, your analysis seems to be correct.

It is not yet clear to me whether we can attempt a refresh of the data now; could you please confirm?

The problem with the download not ending after 48 minutes could not be reproduced in over 20 tries. Each download took between 19 and 31 seconds when coming from cache, and between 197 and 319 seconds when actually updating transaction data across the 31 Exact Online companies.

It is not unlikely that there is indeed a bug present somewhere, but the problem cannot be reproduced.

Recommended steps are:

  • Make sure to reset the cache in Bridge Online using the top-right menu.
  • Continue use of the tables.

Please update this topic in case the problem appears again in the next 7 days. In that case make sure to include the Request ID from Invantive Bridge Online Monitoring.

Is there a possibility that we run a load together from my Power BI file? I’ve reset the cache a few times now and I’ve tried splitting the load (financial year per year), …
I don’t know what more I can do…
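
For reference, splitting per financial year boils down to running the same selection once per year; this is a sketch of such a slice (the year is only an example):

use all@eol

select Account
,      AmountDC
,      Description
,      Division
,      EntryNumber
,      FinancialYear
from   TransactionLinesIncremental@eol
where  FinancialYear = 2023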

Should I be able to ping cloud.invantive.bridge normally?

I’m trying to load a table: the load started at 10:29 and seems to be stuck at 140 000 records at 10:32 (0HN2ANUVUR5P0:00000001).

When I try to ping, I get this result (I’ve tried several times):
[screenshot of the ping results]

Pinging the forum works just fine:

Ping is generally disabled on all our cloud servers. Sometimes, the monitoring will not load until a license slot is available. This will be fixed in a future release.

The problem of downloads stalling after a significant number of rows is currently being investigated.

I hope we have a result on this soon. Today, stalls occur at 140 000 records. Last week it was 90 000, on every retry, again and again.

Invantive Cloud should indeed not be reachable through ping (blocked for security reasons).
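
As a hedged alternative to ping, reachability of the SQL engine can be verified with a lightweight query in the UniversalSQL editor; for example, counting data dictionary entries should not require an Exact Online API call. This is a sketch, not an official diagnostic:

select count(*)
from   SYSTEMTABLES@DataDictionary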

The problem is still occurring for a few users: for some consistently and for others at intervals.

The problem has been analyzed, but the root cause is not yet clear. The team is currently working on finding a solution.

This topic will be updated when there is progress.

The problem occurs between Invantive Bridge Online and the UniversalSQL engine. Therefore, a possible temporary workaround is to copy the data to an Azure SQL database or to text files, for instance using the instructions in Elementary Data Replication Module between Exact Online and Azure SQL Server.
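
A minimal sketch of such a copy, assuming a SQL Server data container has been added to the Invantive Cloud database under the alias sqlserver (the alias and target table name are assumptions, not part of the linked instructions):

use all@eol

--
-- Copy the transaction lines of all selected companies to an
-- assumed SQL Server data container with alias "sqlserver".
--
create or replace table transactionlines_copy@sqlserver
as
select *
from   TransactionLinesIncremental@eol
where  FinancialYear >= 2022

Power BI can then load from the Azure SQL database while the root cause between Bridge Online and the SQL engine is being addressed.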

I will try to build a temporary Azure pipeline.
Please keep us informed about the regular solution!