Some random thoughts regarding files.
What is the page size of secretmem memory? Sometimes we use huge pages, sometimes we fall back to 4k pages. So I assume huge pages in general?
Unless there is an explicit request for hugetlb, I would say the page size is not really important, just like for any other fd. Huge pages can be used transparently.
If everything is currently allocated/mapped at PTE granularity, then yes, I agree. I remember previous versions used to "pool 2MB pages", which might have been problematic (thus my concerns regarding mmap() etc.). If that part is now gone, good!
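For reference, a minimal userspace sketch of creating and mapping such a file (assuming a kernel that has memfd_secret() wired up and enabled; __NR_memfd_secret is 447 on x86-64, everything else here is just illustrative):

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef __NR_memfd_secret
  #define __NR_memfd_secret 447           /* x86-64; adjust for other architectures */
  #endif

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);      /* base page size, typically 4k */
          int fd = syscall(__NR_memfd_secret, 0); /* flags: only FD_CLOEXEC is defined */

          if (fd < 0 || ftruncate(fd, page) < 0)
                  return 1;

          /* faulted in at PTE (base page) granularity, removed from the direct map */
          char *p = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED)
                  return 1;

          p[0] = 's';
          munmap(p, page);
          close(fd);
          return 0;
  }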
What are the semantics of madvise()/fallocate() etc. on such files?
I would expect the same semantics as regular shmem (memfd_create), except that the memory doesn't have _any_ backing storage, which makes it unevictable. So the reclaim-related madvise calls won't work, but there shouldn't be any real reason why e.g. MADV_DONTNEED, MADV_WILLNEED, MADV_DONTFORK and others can't work.
Agreed if we don't have hugepage semantics.
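Concretely, I'd expect something like the following probe to show which advices end up fenced (sketch only; p/len are an existing memfd_secret mapping and its size, and the return values obviously depend on how the implementation fences things):

  #include <stdio.h>
  #include <sys/mman.h>

  /* Probe a few madvise() advices on an existing secretmem mapping (p, len).
   * 0 means the advice was accepted, -1 (likely EINVAL) means it was fenced. */
  static void probe_madvise(char *p, size_t len)
  {
          const struct { int advice; const char *name; } tests[] = {
                  { MADV_DONTNEED, "MADV_DONTNEED" },
                  { MADV_WILLNEED, "MADV_WILLNEED" },
                  { MADV_DONTFORK, "MADV_DONTFORK" },
          };

          for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
                  printf("%s -> %d\n", tests[i].name, madvise(p, len, tests[i].advice));
  }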
Is userfaultfd() properly fenced? Or does it even work (I doubt it)?
How does it behave if I mmap(MAP_FIXED) something in between? At which granularity can I do that (-> page size?)?
Again, nothing really exceptional here. This is a mapping like any other from the address space manipulation POV.
Agreed with the PTE mapping approach.
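As an illustration of that point, punching an anonymous mapping into the middle of a secretmem range with MAP_FIXED should behave like for any other mapping, at base page granularity (sketch; p is assumed to be a 3-page memfd_secret mapping):

  #include <sys/mman.h>
  #include <unistd.h>

  /* Replace the middle page of a 3-page secretmem mapping with anonymous memory.
   * The secretmem VMA is split around the hole, like for any other mapping. */
  static void *punch_fixed(char *p)
  {
          long page = sysconf(_SC_PAGESIZE);

          return mmap(p + page, page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
  }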
What other granularity restrictions are there (-> page size)?
Don't want to open a big discussion here, just some random thoughts. Maybe it has all already been figured out and most of the answers above are "Fails with -EINVAL".
I think that the behavior should really be in sync with shmem semantics as much as possible. Most operations should simply work, with an additional direct map manipulation. There is no real reason to be special. Some functionality might be missing, e.g. hugetlb support, but that has traditionally been added on top of the shmem interface, so nothing really new here.
Agreed!
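One way to read the "in sync with shmem" point from userspace: code written against memfd_create() should mostly keep working if only the fd creation is swapped (sketch; use_secretmem is a hypothetical switch, __NR_memfd_secret as in the earlier example):

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef __NR_memfd_secret
  #define __NR_memfd_secret 447           /* x86-64 */
  #endif

  /* Same fd-based interface as a shmem-backed memfd; only the backing differs:
   * secretmem is unevictable and dropped from the kernel direct map. */
  static int create_backing_fd(int use_secretmem)
  {
          if (use_secretmem)
                  return syscall(__NR_memfd_secret, 0);
          return memfd_create("buf", 0);  /* regular shmem backing */
  }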