
Conversation

szechyjs commented Oct 30, 2025

Possible fix for #4723. This seems to resolve the problem; however, as pointed out in the original issue, we don't know the intent of c8a4a49. Should we actually be waiting for DBCKEND, or is DATAEND sufficient on sdmmc_v2/IDMA devices?

Update: this resolves the hanging issue; however, it results in CRC errors when compiled in release mode (debug mode works fine).

szechyjs changed the title from "Fix SDMMC on v2" to "fix: stm32/sdmmc: don't wait for DBCKEND flag on sdmmc_v2 devices" on Oct 30, 2025
xoviat (Contributor) commented Oct 31, 2025

Do you know the cause of the CRC errors?

szechyjs (Author) commented Nov 2, 2025

> Do you know the cause of the CRC errors?

I just pushed what seemed to fix the DCRC issue. There was definitely some kind of race condition/timing issue, as a simple defmt::debug! line in the right place would fix things. But I believe the root issue was that the status register data was stale/incorrect during the poll_fn: it would have the dcrcfail bit set on the first pass, but the interrupt would never fire for it.

The current state of this PR is working reliably for me now in both debug and release mode. I'd appreciate it if others could test it as well.
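For context, the change under discussion reads the status register through T::regs() on every pass of the poll loop. A simplified sketch of that shape, pieced together from the hunks quoted in this thread (the remaining status checks are elided, so this is illustrative rather than the exact committed code):

    let res = poll_fn(|cx| {
        // Register the waker first so a completion interrupt after this
        // point still wakes the task.
        T::state().register(cx.waker());
        // Read STAR through T::regs() on every poll rather than through a
        // register-block binding captured before the poll_fn; the cached
        // form showed a stale DCRCFAIL bit on the first pass in release
        // builds.
        let status = T::regs().star().read();

        if status.dcrcfail() {
            return Poll::Ready(Err(Error::Crc));
        }
        // ... remaining flag checks from the driver are elided here ...
        Poll::Pending
    })
    .await;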

In a review thread on this hunk:

         let res = poll_fn(|cx| {
             T::state().register(cx.waker());
-            let status = regs.star().read();
+            let status = T::regs().star().read();
xoviat (Contributor) commented:

If you were to revert this, and then put compiler_fence(Ordering::SeqCst) after registering the waker, is the problem resolved?

szechyjs (Author) replied:

I tried the following and it didn't work; it resulted in the same CRC errors.

diff --git a/embassy-stm32/src/sdmmc/mod.rs b/embassy-stm32/src/sdmmc/mod.rs
index b698c475f..e220466c5 100644
--- a/embassy-stm32/src/sdmmc/mod.rs
+++ b/embassy-stm32/src/sdmmc/mod.rs
@@ -5,6 +5,7 @@ use core::default::Default;
 use core::future::poll_fn;
 use core::marker::PhantomData;
 use core::ops::{Deref, DerefMut};
+use core::sync::atomic::{Ordering, compiler_fence};
 use core::task::Poll;

 use embassy_hal_internal::drop::OnDrop;
@@ -1033,9 +1034,12 @@ impl<'d, T: Instance> Sdmmc<'d, T> {
     /// Wait for a previously started datapath transfer to complete from an interrupt.
     #[inline]
     async fn complete_datapath_transfer(block: bool) -> Result<(), Error> {
+        let regs = T::regs();
+
         let res = poll_fn(|cx| {
             T::state().register(cx.waker());
-            let status = T::regs().star().read();
+            compiler_fence(Ordering::SeqCst);
+            let status = regs.star().read();

             if status.dcrcfail() {
                 return Poll::Ready(Err(Error::Crc));

szechyjs (Author) added:

For what it's worth, searching around the codebase, most poll_fn usage calls T::regs() within the wrapped function rather than using a previously stored variable.
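To make that concrete, the two forms being contrasted look roughly like this (a sketch only, using the names from the hunks above):

    // Register block cached in a local before the poll_fn (the form that
    // showed stale DCRCFAIL readings in this driver):
    let regs = T::regs();
    let status = regs.star().read();

    // Register block fetched inside the wrapped function on every poll
    // (the form most other poll_fn call sites in the crate use):
    let status = T::regs().star().read();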

xoviat (Contributor) replied:

I tried, but could not figure it out either. Add a comment above the line pointing to #4723 and noting that the compiler may not be sufficiently constrained here. Then I will merge this as an improvement over the current situation, if nothing else comes up.
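One possible shape for the requested comment, sketched here only as an illustration (not the wording actually committed):

    // Workaround for #4723: read STAR through T::regs() inside the poll
    // closure rather than through a register-block binding captured outside
    // it. The compiler may not be sufficiently constrained here; the cached
    // form produced stale DCRCFAIL readings in release builds.
    let status = T::regs().star().read();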

xoviat (Contributor) commented Nov 2, 2025

ea77131 should not impact the behavior of the logic, and only works because of compiler implementation details. Maybe this can be merged, but the issue cannot be closed until the compiler is constrained appropriately.
