package Paws::Rekognition::DetectLabelsResponse;
  use Moose;
  has LabelModelVersion => (is => 'ro', isa => 'Str');
  has Labels => (is => 'ro', isa => 'ArrayRef[Paws::Rekognition::Label]');
  has OrientationCorrection => (is => 'ro', isa => 'Str');

  has _request_id => (is => 'ro', isa => 'Str');

### main pod documentation begin ###

=head1 NAME

Paws::Rekognition::DetectLabelsResponse

=head1 ATTRIBUTES


=head2 LabelModelVersion => Str

Version number of the label detection model that was used to detect
labels.


=head2 Labels => ArrayRef[L<Paws::Rekognition::Label>]

An array of labels for the real-world objects detected.


=head2 OrientationCorrection => Str

The value of C<OrientationCorrection> is always null.

If the input image is in .jpeg format, it might contain exchangeable
image file format (Exif) metadata that includes the image's
orientation. Amazon Rekognition uses this orientation information to
perform image correction. The bounding box coordinates are translated
to represent object locations after the orientation information in the
Exif metadata is used to correct the image orientation. Images in .png
format don't contain Exif metadata.

Amazon Rekognition doesn't perform image correction for images in .png
format and .jpeg images without orientation information in the image
Exif metadata. The bounding box coordinates aren't translated and
represent the object locations before the image is rotated.

Valid values are: C<"ROTATE_0">, C<"ROTATE_90">, C<"ROTATE_180">, C<"ROTATE_270">


=head2 _request_id => Str


=cut

1;
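
=head1 EXAMPLE

A minimal usage sketch, not part of the generated class: it assumes AWS
credentials are already configured, and the region, bucket name, object key,
and filter values below are placeholders chosen for illustration.

  use Paws;

  # Build a Rekognition service object; the region is a placeholder.
  my $rekognition = Paws->service('Rekognition', region => 'us-east-1');

  # DetectLabels returns a Paws::Rekognition::DetectLabelsResponse object.
  my $response = $rekognition->DetectLabels(
    Image => {
      S3Object => {
        Bucket => 'my-example-bucket',   # placeholder bucket
        Name   => 'photo.jpg',           # placeholder object key
      },
    },
    MaxLabels     => 10,
    MinConfidence => 75,
  );

  # Read the attributes documented above.
  printf "Model version: %s\n", $response->LabelModelVersion // 'unknown';
  for my $label (@{ $response->Labels }) {
    printf "  %s (%.1f%% confidence)\n", $label->Name, $label->Confidence;
  }

=cut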