mm: hugetlb pages should not be reserved by shmat() if SHM_NORESERVE
authorPrakash Sangappa <prakash.sangappa@oracle.com>
Tue, 23 Jan 2024 20:04:42 +0000 (12:04 -0800)
committerAndrew Morton <akpm@linux-foundation.org>
Thu, 8 Feb 2024 05:20:32 +0000 (21:20 -0800)
For shared memory of type SHM_HUGETLB, hugetlb pages are reserved in the
shmget() call.  If the SHM_NORESERVE flag is specified, the hugetlb pages
are not reserved.  However, when the shared memory is attached with the
shmat() call, hugetlb pages are incorrectly reserved for SHM_HUGETLB
shared memory that was created with SHM_NORESERVE, which is a bug.

-------------------------------
The following test shows the issue.

$cat shmhtb.c

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Example values (not from the original report); 10MB corresponds to
 * the 5 x 2MB huge pages seen reserved in the output below. */
#define SKEY  0x1234
#define SHMSZ (10 * 1024 * 1024)

int main(void)
{
	int shmflags = 0660 | IPC_CREAT | SHM_HUGETLB | SHM_NORESERVE;
	int shmid;

	shmid = shmget(SKEY, SHMSZ, shmflags);
	if (shmid < 0) {
		printf("shmat: shmget() failed, %d\n", errno);
		return 1;
	}
	printf("After shmget()\n");
	system("cat /proc/meminfo | grep -i hugepages_");

	shmat(shmid, NULL, 0);
	printf("\nAfter shmat()\n");
	system("cat /proc/meminfo | grep -i hugepages_");

	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}

 #sysctl -w vm.nr_hugepages=20
 #./shmhtb

After shmget()
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        0
HugePages_Surp:        0

After shmat()
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        5 <--
HugePages_Surp:        0
--------------------------------

The fix is to ensure that hugetlb pages are not reserved for SHM_HUGETLB
shared memory in the shmat() call.

Link: https://lkml.kernel.org/r/1706040282-12388-1-git-send-email-prakash.sangappa@oracle.com
Signed-off-by: Prakash Sangappa <prakash.sangappa@oracle.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fs/hugetlbfs/inode.c

index 671664fed3077f794de3a5707bd69b90cb328e78..ee13c2ca8ad29396c038489f51d7c4b682124a77 100644 (file)
@@ -100,6 +100,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
        loff_t len, vma_len;
        int ret;
        struct hstate *h = hstate_file(file);
+       vm_flags_t vm_flags;
 
        /*
         * vma address alignment (but not the pgoff alignment) has
@@ -141,10 +142,20 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
        file_accessed(file);
 
        ret = -ENOMEM;
+
+       vm_flags = vma->vm_flags;
+       /*
+        * for SHM_HUGETLB, the pages are reserved in the shmget() call so skip
+        * reserving here. Note: only for SHM hugetlbfs file, the inode
+        * flag S_PRIVATE is set.
+        */
+       if (inode->i_flags & S_PRIVATE)
+               vm_flags |= VM_NORESERVE;
+
        if (!hugetlb_reserve_pages(inode,
                                vma->vm_pgoff >> huge_page_order(h),
                                len >> huge_page_shift(h), vma,
-                               vma->vm_flags))
+                               vm_flags))
                goto out;
 
        ret = 0;