package Paws::Rekognition::RecognizeCelebritiesResponse;
  use Moose;
  has CelebrityFaces => (is => 'ro', isa => 'ArrayRef[Paws::Rekognition::Celebrity]');
  has OrientationCorrection => (is => 'ro', isa => 'Str');
  has UnrecognizedFaces => (is => 'ro', isa => 'ArrayRef[Paws::Rekognition::ComparedFace]');

  has _request_id => (is => 'ro', isa => 'Str');

### main pod documentation begin ###

=head1 NAME

Paws::Rekognition::RecognizeCelebritiesResponse

=head1 ATTRIBUTES


=head2 CelebrityFaces => ArrayRef[L<Paws::Rekognition::Celebrity>]

Details about each celebrity found in the image. Amazon Rekognition can
detect a maximum of 64 celebrities in an image.


=head2 OrientationCorrection => Str

The orientation of the input image (counterclockwise direction). If
your application displays the image, you can use this value to correct
the orientation. The bounding box coordinates returned in
C<CelebrityFaces> and C<UnrecognizedFaces> represent face locations
before the image orientation is corrected.

If the input image is in .jpeg format, it might contain exchangeable
image (Exif) metadata that includes the image's orientation. If so, and
the Exif metadata for the input image populates the orientation field,
the value of C<OrientationCorrection> is null. The C<CelebrityFaces> and
C<UnrecognizedFaces> bounding box coordinates represent face locations
after Exif metadata is used to correct the image orientation. Images in
.png format don't contain Exif metadata.

Valid values are: C<"ROTATE_0">, C<"ROTATE_90">, C<"ROTATE_180">, C<"ROTATE_270">

=head2 UnrecognizedFaces => ArrayRef[L<Paws::Rekognition::ComparedFace>]

Details about each unrecognized face in the image.


=head2 _request_id => Str


=cut

1;
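
### usage example (not part of the generated class) ###

=pod

=head1 EXAMPLE

A minimal sketch of calling RecognizeCelebrities through L<Paws> and reading
this response class. The region, the S3 bucket and key, and the C<Name> and
C<MatchConfidence> accessors on L<Paws::Rekognition::Celebrity> are assumptions
based on the Amazon Rekognition API shape, not taken from this file.

  use Paws;

  # Construct the Rekognition service client (region is an assumption).
  my $rekognition = Paws->service('Rekognition', region => 'us-east-1');

  # Call RecognizeCelebrities with an image stored in S3 (bucket/key are
  # placeholders). The call returns a
  # Paws::Rekognition::RecognizeCelebritiesResponse object.
  my $res = $rekognition->RecognizeCelebrities(
    Image => { S3Object => { Bucket => 'my-bucket', Name => 'photo.jpg' } },
  );

  # Each element of CelebrityFaces is a Paws::Rekognition::Celebrity.
  for my $celebrity (@{ $res->CelebrityFaces // [] }) {
    printf "%s (confidence %.1f)\n", $celebrity->Name, $celebrity->MatchConfidence;
  }

  # UnrecognizedFaces holds Paws::Rekognition::ComparedFace objects.
  printf "Unrecognized faces: %d\n", scalar @{ $res->UnrecognizedFaces // [] };

  # OrientationCorrection, when present, is one of ROTATE_0/90/180/270.
  print 'Orientation: ', ($res->OrientationCorrection // 'n/a'), "\n";

=cut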