PointerInputScope

Cmn
interface PointerInputScope : Density

Known direct subclasses: DragAndDropSourceScope (this interface is deprecated: replaced by a callback for obtaining transfer data, and start detection is performed by Compose itself).

Receiver scope for Modifier.pointerInput that permits handling pointer input.
Summary

Public functions

|  |  |  |
|---|---|---|
| suspend R | <R : Any?> awaitPointerEventScope(block: suspend AwaitPointerEventScope.() -> R). Suspend and install a pointer input block that can await input events and respond to them immediately. | Cmn |
Public properties

|  |  |  |
|---|---|---|
| open Size | extendedTouchPadding. The additional space applied to each side of the layout area when the layout is smaller than ViewConfiguration.minimumTouchTargetSize. | Cmn |
| open Boolean | interceptOutOfBoundsChildEvents. Intercept pointer input that children receive even if the pointer is out of bounds. | Cmn |
| IntSize | size. The measured size of the pointer input region. | Cmn |
| ViewConfiguration | viewConfiguration. The ViewConfiguration used to tune gesture detectors. | Cmn |
Extension functions

|  |  |  |
|---|---|---|
| suspend Unit | PointerInputScope.detectDragGestures(onDragStart, onDragEnd, onDragCancel, onDrag). Gesture detector that waits for pointer down and touch slop in any direction and then calls onDrag for each drag event. | Cmn |
| suspend Unit | PointerInputScope.detectDragGestures(orientationLock, onDragStart, onDragEnd, onDragCancel, shouldAwaitTouchSlop, onDrag). A gesture detector that waits for pointer down and touch slop in the direction specified by orientationLock and then calls onDrag for each drag event. | Cmn |
| suspend Unit | PointerInputScope.detectDragGesturesAfterLongPress(onDragStart, onDragEnd, onDragCancel, onDrag). Gesture detector that waits for pointer down and long press, after which it calls onDrag for each drag event. | Cmn |
| suspend Unit | PointerInputScope.detectHorizontalDragGestures(onDragStart, onDragEnd, onDragCancel, onHorizontalDrag). Gesture detector that waits for pointer down and touch slop in the horizontal direction and then calls onHorizontalDrag for each horizontal drag event. | Cmn |
| suspend Unit | PointerInputScope.detectVerticalDragGestures(onDragStart, onDragEnd, onDragCancel, onVerticalDrag). Gesture detector that waits for pointer down and touch slop in the vertical direction and then calls onVerticalDrag for each vertical drag event. | Cmn |
| suspend Unit | PointerInputScope.awaitEachGesture(block: suspend AwaitPointerEventScope.() -> Unit). Repeatedly calls block to handle gestures. | Cmn |
| suspend Unit | PointerInputScope.forEachGesture(block: suspend PointerInputScope.() -> Unit). This function is deprecated. Use awaitEachGesture instead; forEachGesture() can drop events between gestures. | Cmn |
| suspend Unit | PointerInputScope.detectTapGestures(onDoubleTap, onLongPress, onPress, onTap). Detects tap, double-tap, and long press gestures and calls onTap, onDoubleTap, and onLongPress, respectively, when detected. | Cmn |
| suspend Unit | PointerInputScope.detectTransformGestures(panZoomLock, onGesture). A gesture detector for rotation, panning, and zoom. | Cmn |
Inherited functions

From androidx.compose.ui.unit.Density.

Inherited properties

From androidx.compose.ui.unit.Density.
Public functions
awaitPointerEventScope
suspend fun <R : Any?> awaitPointerEventScope(block: suspend AwaitPointerEventScope.() -> R): R
Suspend and install a pointer input block that can await input events and respond to them immediately. A call to awaitPointerEventScope will resume with block's result after it completes.
More than one awaitPointerEventScope can run concurrently in the same PointerInputScope by using kotlinx.coroutines.launch; blocks are dispatched to in the order in which they were installed.
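For illustration only (not from the official reference samples), a minimal sketch of a custom modifier that installs a single long-lived block with awaitPointerEventScope; the helper name reportPresses is hypothetical.

import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.input.pointer.changedToDown
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: installs one long-lived block that reports every new press.
fun Modifier.reportPresses(onPress: (Offset) -> Unit): Modifier =
    pointerInput(Unit) {
        awaitPointerEventScope {
            while (true) {
                // Suspends until the next pointer event delivered to this region.
                val event = awaitPointerEvent()
                event.changes.forEach { change ->
                    if (change.changedToDown()) onPress(change.position)
                }
            }
        }
    }

Because the block never returns, the call stays suspended for as long as the modifier is attached, handling events as they arrive.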
Public properties
extendedTouchPadding
open val extendedTouchPadding: Size
The additional space applied to each side of the layout area when the layout is smaller than ViewConfiguration.minimumTouchTargetSize.
interceptOutOfBoundsChildEvents
open var interceptOutOfBoundsChildEvents: Boolean
Intercept pointer input that children receive even if the pointer is out of bounds.
If true, and a child has been moved out of this layout and receives an event, this will receive that event. If false, a child receiving pointer input outside of the bounds of this layout will not trigger any events in this.
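A hedged sketch of how this property might be toggled from inside a pointerInput block (the helper name is hypothetical):

import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: opts this region into events that children receive
// while the pointer is outside this layout's bounds.
fun Modifier.interceptChildEventsOutOfBounds(): Modifier =
    pointerInput(Unit) {
        interceptOutOfBoundsChildEvents = true
        awaitPointerEventScope {
            while (true) {
                // Out-of-bounds events sent to children are now delivered here as well.
                awaitPointerEvent()
            }
        }
    }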
size
val size: IntSize
The measured size of the pointer input region. Input events will be reported with a coordinate space of (0, 0) to (size.width, size.height) as the input region, with (0, 0) indicating the upper left corner.
viewConfiguration
val viewConfiguration: ViewConfiguration
The ViewConfiguration used to tune gesture detectors.
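The following sketch (the helper name reportPressSide is hypothetical, not part of the API) shows size and viewConfiguration being read from inside a pointerInput block:

import androidx.compose.foundation.gestures.awaitEachGesture
import androidx.compose.foundation.gestures.awaitFirstDown
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: reports whether each press landed in the right half
// of the measured pointer input region.
fun Modifier.reportPressSide(onPress: (rightHalf: Boolean) -> Unit): Modifier =
    pointerInput(Unit) {
        // viewConfiguration.touchSlop and its timing values are also available
        // here for custom gesture tuning.
        awaitEachGesture {
            val down = awaitFirstDown()
            // size is the measured IntSize of this pointer input region.
            onPress(down.position.x > size.width / 2f)
        }
    }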
Extension functions
detectDragGestures
suspend fun PointerInputScope.detectDragGestures(
onDragStart: (Offset) -> Unit = {},
onDragEnd: () -> Unit = {},
onDragCancel: () -> Unit = {},
onDrag: (change: PointerInputChange, dragAmount: Offset) -> Unit
): Unit
Gesture detector that waits for pointer down and touch slop in any direction and then calls onDrag for each drag event. It follows the touch slop detection of awaitTouchSlopOrCancellation but will consume the position change automatically once the touch slop has been crossed. See the detectDragGestures overload with orientation lock for a fuller set of capabilities.
onDragStart is called when the touch slop has been passed and includes an Offset representing the last known pointer position relative to the containing element. The Offset can be outside the actual bounds of the element itself, meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
onDragEnd is called after all pointers are up and onDragCancel is called if another gesture has consumed pointer input, canceling this gesture.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectDragGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.offset
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.toSize
import kotlin.math.roundToInt

val offsetX = remember { mutableStateOf(0f) }
val offsetY = remember { mutableStateOf(0f) }
var size by remember { mutableStateOf(Size.Zero) }
Box(Modifier.fillMaxSize().onSizeChanged { size = it.toSize() }) {
    Box(
        Modifier.offset { IntOffset(offsetX.value.roundToInt(), offsetY.value.roundToInt()) }
            .size(50.dp)
            .background(Color.Blue)
            .pointerInput(Unit) {
                detectDragGestures { _, dragAmount ->
                    val original = Offset(offsetX.value, offsetY.value)
                    val summed = original + dragAmount
                    val newValue = Offset(
                        x = summed.x.coerceIn(0f, size.width - 50.dp.toPx()),
                        y = summed.y.coerceIn(0f, size.height - 50.dp.toPx()),
                    )
                    offsetX.value = newValue.x
                    offsetY.value = newValue.y
                }
            }
    )
}
| See also | |
|---|---|
| detectVerticalDragGestures | |
| detectHorizontalDragGestures | |
| detectDragGesturesAfterLongPress | to detect gestures after long press |
detectDragGestures
suspend fun PointerInputScope.detectDragGestures(
orientationLock: Orientation?,
onDragStart: (down: PointerInputChange, slopTriggerChange: PointerInputChange, overSlopOffset: Offset) -> Unit = { _, _, _ -> },
onDragEnd: (change: PointerInputChange) -> Unit = {},
onDragCancel: () -> Unit = {},
shouldAwaitTouchSlop: () -> Boolean = { true },
onDrag: (change: PointerInputChange, dragAmount: Offset) -> Unit
): Unit
A gesture detector that waits for pointer down and touch slop in the direction specified by orientationLock and then calls onDrag for each drag event. It follows the touch slop detection of awaitTouchSlopOrCancellation but will consume the position change automatically once the touch slop has been crossed; the amount of drag over the touch slop is reported as the first drag event onDrag after the slop is crossed. If shouldAwaitTouchSlop returns false, the touch slop recognition phase is skipped and the drag gesture is recognized immediately; the first onDrag in that case reports Offset.Zero.
onDragStart is called when the touch slop has been passed and includes an Offset representing the last known pointer position relative to the containing element as well as the initial down event that triggered this gesture detection cycle. The Offset can be outside the actual bounds of the element itself meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
onDragEnd is called after all pointers are up with the event change of the up event and onDragCancel is called if another gesture has consumed pointer input, canceling this gesture.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectDragGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.offset
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.toSize
import kotlin.math.roundToInt

val offsetX = remember { mutableStateOf(0f) }
val offsetY = remember { mutableStateOf(0f) }
var size by remember { mutableStateOf(Size.Zero) }
Box(Modifier.fillMaxSize().onSizeChanged { size = it.toSize() }) {
    Box(
        Modifier.offset { IntOffset(offsetX.value.roundToInt(), offsetY.value.roundToInt()) }
            .size(50.dp)
            .background(Color.Blue)
            .pointerInput(Unit) {
                detectDragGestures { _, dragAmount ->
                    val original = Offset(offsetX.value, offsetY.value)
                    val summed = original + dragAmount
                    val newValue = Offset(
                        x = summed.x.coerceIn(0f, size.width - 50.dp.toPx()),
                        y = summed.y.coerceIn(0f, size.height - 50.dp.toPx()),
                    )
                    offsetX.value = newValue.x
                    offsetY.value = newValue.y
                }
            }
    )
}
| Parameters | |
|---|---|
| orientationLock: Orientation? | Optionally locks detection to this orientation; when provided, touch slop detection and drag event detection are conditioned to the given orientation axis. |
| onDragStart: (down: PointerInputChange, slopTriggerChange: PointerInputChange, overSlopOffset: Offset) -> Unit = { _, _, _ -> } | A lambda to be called when the drag gesture starts. It receives the initial down change, the change that crossed the touch slop, and the offset dragged beyond the slop. |
| onDragEnd: (change: PointerInputChange) -> Unit = {} | A lambda to be called when the gesture ends. It receives the change of the final up event. |
| onDragCancel: () -> Unit = {} | A lambda to be called when the gesture is cancelled, either by an error or because another gesture consumed the input. |
| shouldAwaitTouchSlop: () -> Boolean = { true } | Indicates whether touch slop detection should be performed before the drag starts; return false to skip it and begin the drag immediately. |
| onDrag: (change: PointerInputChange, dragAmount: Offset) -> Unit | A lambda to be called for each delta event in the gesture. It receives the pointer change and the drag amount for that event. |
| See also | |
|---|---|
| detectVerticalDragGestures | |
| detectHorizontalDragGestures | |
| detectDragGesturesAfterLongPress | to detect gestures after long press |
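The sample above reuses the basic overload; as a hedged sketch of the orientation-locked overload itself (the modifier name horizontalDragDeltas is hypothetical), it could be wired up as follows:

import androidx.compose.foundation.gestures.Orientation
import androidx.compose.foundation.gestures.detectDragGestures
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: reports horizontal drag deltas using the
// orientation-locked overload described above.
fun Modifier.horizontalDragDeltas(onDelta: (Float) -> Unit): Modifier =
    pointerInput(Unit) {
        detectDragGestures(
            orientationLock = Orientation.Horizontal,
            onDragStart = { down, slopTriggerChange, overSlopOffset ->
                // down: initial press; slopTriggerChange: change that crossed the slop;
                // overSlopOffset: movement beyond the slop threshold.
            },
            onDragEnd = { change -> /* change of the final up event */ },
            onDragCancel = { /* another gesture consumed the input */ },
            shouldAwaitTouchSlop = { true }, // return false to start dragging immediately
            onDrag = { _, dragAmount -> onDelta(dragAmount.x) },
        )
    }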
detectDragGesturesAfterLongPress
suspend fun PointerInputScope.detectDragGesturesAfterLongPress(
onDragStart: (Offset) -> Unit = {},
onDragEnd: () -> Unit = {},
onDragCancel: () -> Unit = {},
onDrag: (change: PointerInputChange, dragAmount: Offset) -> Unit
): Unit
Gesture detector that waits for pointer down and long press, after which it calls onDrag for each drag event.
onDragStart is called when a long press is detected and includes an Offset representing the last known pointer position relative to the containing element. The Offset can be outside the actual bounds of the element itself, meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
onDragEnd is called after all pointers are up and onDragCancel is called if another gesture has consumed pointer input, canceling this gesture. This function will automatically consume all the position change after the long press.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectDragGesturesAfterLongPress
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.offset
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.toSize
import kotlin.math.roundToInt

val offsetX = remember { mutableStateOf(0f) }
val offsetY = remember { mutableStateOf(0f) }
var size by remember { mutableStateOf(Size.Zero) }
Box(Modifier.fillMaxSize().onSizeChanged { size = it.toSize() }) {
    Box(
        Modifier.offset { IntOffset(offsetX.value.roundToInt(), offsetY.value.roundToInt()) }
            .size(50.dp)
            .background(Color.Blue)
            .pointerInput(Unit) {
                detectDragGesturesAfterLongPress { _, dragAmount ->
                    val original = Offset(offsetX.value, offsetY.value)
                    val summed = original + dragAmount
                    val newValue = Offset(
                        x = summed.x.coerceIn(0f, size.width - 50.dp.toPx()),
                        y = summed.y.coerceIn(0f, size.height - 50.dp.toPx()),
                    )
                    offsetX.value = newValue.x
                    offsetY.value = newValue.y
                }
            }
    )
}
detectHorizontalDragGestures
suspend fun PointerInputScope.detectHorizontalDragGestures(
onDragStart: (Offset) -> Unit = {},
onDragEnd: () -> Unit = {},
onDragCancel: () -> Unit = {},
onHorizontalDrag: (change: PointerInputChange, dragAmount: Float) -> Unit
): Unit
Gesture detector that waits for pointer down and touch slop in the horizontal direction and then calls onHorizontalDrag for each horizontal drag event. It follows the touch slop detection of awaitHorizontalTouchSlopOrCancellation, but will consume the position change automatically once the touch slop has been crossed.
onDragStart is called when the touch slop has been passed and includes an Offset representing the last known pointer position relative to the containing element. The Offset can be outside the actual bounds of the element itself, meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
onDragEnd is called after all pointers are up and onDragCancel is called if another gesture has consumed pointer input, canceling this gesture.
This gesture detector will coordinate with detectVerticalDragGestures and awaitVerticalTouchSlopOrCancellation to ensure only vertical or horizontal dragging is locked, but not both.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectHorizontalDragGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxHeight
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.offset
import androidx.compose.foundation.layout.width
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.dp
import kotlin.math.roundToInt

val offsetX = remember { mutableStateOf(0f) }
val offsetY = remember { mutableStateOf(0f) }
var width by remember { mutableStateOf(0f) }
Box(Modifier.fillMaxSize().onSizeChanged { width = it.width.toFloat() }) {
    Box(
        Modifier.offset { IntOffset(offsetX.value.roundToInt(), offsetY.value.roundToInt()) }
            .fillMaxHeight()
            .width(50.dp)
            .background(Color.Blue)
            .pointerInput(Unit) {
                detectHorizontalDragGestures { _, dragAmount ->
                    val originalX = offsetX.value
                    val newValue = (originalX + dragAmount).coerceIn(0f, width - 50.dp.toPx())
                    offsetX.value = newValue
                }
            }
    )
}
detectVerticalDragGestures
suspend fun PointerInputScope.detectVerticalDragGestures(
onDragStart: (Offset) -> Unit = {},
onDragEnd: () -> Unit = {},
onDragCancel: () -> Unit = {},
onVerticalDrag: (change: PointerInputChange, dragAmount: Float) -> Unit
): Unit
Gesture detector that waits for pointer down and touch slop in the vertical direction and then calls onVerticalDrag for each vertical drag event. It follows the touch slop detection of awaitVerticalTouchSlopOrCancellation, but will consume the position change automatically once the touch slop has been crossed.
onDragStart is called when the touch slop has been passed and includes an Offset representing the last known pointer position relative to the containing element. The Offset can be outside the actual bounds of the element itself, meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
onDragEnd is called after all pointers are up and onDragCancel is called if another gesture has consumed pointer input, canceling this gesture.
This gesture detector will coordinate with detectHorizontalDragGestures and awaitHorizontalTouchSlopOrCancellation to ensure only vertical or horizontal dragging is locked, but not both.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectVerticalDragGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.height
import androidx.compose.foundation.layout.offset
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.dp
import kotlin.math.roundToInt

val offsetX = remember { mutableStateOf(0f) }
val offsetY = remember { mutableStateOf(0f) }
var height by remember { mutableStateOf(0f) }
Box(Modifier.fillMaxSize().onSizeChanged { height = it.height.toFloat() }) {
    Box(
        Modifier.offset { IntOffset(offsetX.value.roundToInt(), offsetY.value.roundToInt()) }
            .fillMaxWidth()
            .height(50.dp)
            .background(Color.Blue)
            .pointerInput(Unit) {
                detectVerticalDragGestures { _, dragAmount ->
                    val originalY = offsetY.value
                    val newValue = (originalY + dragAmount).coerceIn(0f, height - 50.dp.toPx())
                    offsetY.value = newValue
                }
            }
    )
}
awaitEachGesture
suspend fun PointerInputScope.awaitEachGesture(block: suspend AwaitPointerEventScope.() -> Unit): Unit
Repeatedly calls block to handle gestures. If there is a CancellationException, it will wait until all pointers are raised before another gesture is detected, or it exits if isActive is false.
block is run within PointerInputScope.awaitPointerEventScope and will loop entirely within the AwaitPointerEventScope so events will not be lost between gestures.
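A minimal sketch of this pattern (the helper name onPressReleased is hypothetical, and waitForUpOrCancellation is used here only as one way to finish a gesture):

import androidx.compose.foundation.gestures.awaitEachGesture
import androidx.compose.foundation.gestures.awaitFirstDown
import androidx.compose.foundation.gestures.waitForUpOrCancellation
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: each awaitEachGesture iteration handles one
// press/release cycle; events are not dropped between iterations.
fun Modifier.onPressReleased(onReleased: () -> Unit): Modifier =
    pointerInput(Unit) {
        awaitEachGesture {
            awaitFirstDown()
            // Returns null if the gesture is cancelled or consumed elsewhere.
            if (waitForUpOrCancellation() != null) onReleased()
        }
    }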
forEachGesture
suspend fun PointerInputScope.forEachGesture(block: suspend PointerInputScope.() -> Unit): Unit
Repeatedly calls block to handle gestures. If there is a CancellationException, it will wait until all pointers are raised before another gesture is detected, or it exits if isActive is false.
awaitEachGesture does the same thing without the possibility of missing events between gestures, but also lacks the ability to call arbitrary suspending functions within block.
detectTapGestures
suspend fun PointerInputScope.detectTapGestures(
onDoubleTap: ((Offset) -> Unit)? = null,
onLongPress: ((Offset) -> Unit)? = null,
onPress: suspend PressGestureScope.(Offset) -> Unit = NoPressGesture,
onTap: ((Offset) -> Unit)? = null
): Unit
Detects tap, double-tap, and long press gestures and calls onTap, onDoubleTap, and onLongPress, respectively, when detected. onPress is called when the press is detected and the PressGestureScope.tryAwaitRelease and PressGestureScope.awaitRelease can be used to detect when pointers have released or the gesture was canceled. The first pointer down and final pointer up are consumed, and in the case of long press, all changes after the long press is detected are consumed.
Each function parameter receives an Offset representing the position relative to the containing element. The Offset can be outside the actual bounds of the element itself meaning the numbers can be negative or larger than the element bounds if the touch target is smaller than the ViewConfiguration.minimumTouchTargetSize.
When onDoubleTap is provided, the tap gesture is detected only after the ViewConfiguration.doubleTapMinTimeMillis has passed and onDoubleTap is called if the second tap is started before ViewConfiguration.doubleTapTimeoutMillis. If onDoubleTap is not provided, then onTap is called when the pointer up has been received.
After the initial onPress, if the pointer moves out of the input area, the position change is consumed, or another gesture consumes the down or up events, the gestures are considered canceled. That means onDoubleTap, onLongPress, and onTap will not be called after a gesture has been canceled.
If the first down event is consumed somewhere else, the entire gesture will be skipped, including onPress.
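As an illustrative sketch (the helper name reportTaps is hypothetical), all four callbacks can be wired up like this:

import androidx.compose.foundation.gestures.detectTapGestures
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

// Hypothetical helper: forwards tap, double-tap, long-press, and press
// lifecycle information as plain strings.
fun Modifier.reportTaps(onEvent: (String) -> Unit): Modifier =
    pointerInput(Unit) {
        detectTapGestures(
            onPress = { offset ->
                onEvent("press at $offset")
                // tryAwaitRelease() resumes with true on release, false on cancellation.
                val released = tryAwaitRelease()
                onEvent(if (released) "released" else "press cancelled")
            },
            onTap = { onEvent("tap at $it") },
            onDoubleTap = { onEvent("double tap at $it") },
            onLongPress = { onEvent("long press at $it") },
        )
    }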
detectTransformGestures
suspend fun PointerInputScope.detectTransformGestures(
panZoomLock: Boolean = false,
onGesture: (centroid: Offset, pan: Offset, zoom: Float, rotation: Float) -> Unit
): Unit
A gesture detector for rotation, panning, and zoom. Once touch slop has been reached, the user can use rotation, panning and zoom gestures. onGesture will be called when any of the rotation, zoom or pan occurs, passing the rotation angle in degrees, zoom in scale factor and pan as an offset in pixels. Each of these changes is a difference between the previous call and the current gesture. This will consume all position changes after touch slop has been reached. onGesture will also provide centroid of all the pointers that are down.
If panZoomLock is true, rotation is allowed only if touch slop is detected for rotation before pan or zoom motions. If not, pan and zoom gestures will be detected, but rotation gestures will not be. If panZoomLock is false, once touch slop is reached, all three gestures are detected.
Example Usage:
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.detectTransformGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.TransformOrigin
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.input.pointer.pointerInput
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin

/**
 * Rotates the given offset around the origin by the given angle in degrees.
 *
 * A positive angle indicates a counterclockwise rotation around the right-handed 2D Cartesian
 * coordinate system.
 *
 * See: [Rotation matrix](https://en.wikipedia.org/wiki/Rotation_matrix)
 */
fun Offset.rotateBy(angle: Float): Offset {
    val angleInRadians = angle * (PI / 180)
    val cos = cos(angleInRadians)
    val sin = sin(angleInRadians)
    return Offset((x * cos - y * sin).toFloat(), (x * sin + y * cos).toFloat())
}

var offset by remember { mutableStateOf(Offset.Zero) }
var zoom by remember { mutableStateOf(1f) }
var angle by remember { mutableStateOf(0f) }
Box(
    Modifier.pointerInput(Unit) {
        detectTransformGestures(
            onGesture = { centroid, pan, gestureZoom, gestureRotate ->
                val oldScale = zoom
                val newScale = zoom * gestureZoom
                // For natural zooming and rotating, the centroid of the gesture should
                // be the fixed point where zooming and rotating occurs.
                // We compute where the centroid was (in the pre-transformed coordinate
                // space), and then compute where it will be after this delta.
                // We then compute what the new offset should be to keep the centroid
                // visually stationary for rotating and zooming, and also apply the pan.
                offset = (offset + centroid / oldScale).rotateBy(gestureRotate) -
                    (centroid / newScale + pan / oldScale)
                zoom = newScale
                angle += gestureRotate
            }
        )
    }
        .graphicsLayer {
            translationX = -offset.x * zoom
            translationY = -offset.y * zoom
            scaleX = zoom
            scaleY = zoom
            rotationZ = angle
            transformOrigin = TransformOrigin(0f, 0f)
        }
        .background(Color.Blue)
        .fillMaxSize()
)