package Paws::Rekognition::GetContentModerationResponse;
  use Moose;
  has JobStatus => (is => 'ro', isa => 'Str');
  has ModerationLabels => (is => 'ro', isa => 'ArrayRef[Paws::Rekognition::ContentModerationDetection]');
  has ModerationModelVersion => (is => 'ro', isa => 'Str');
  has NextToken => (is => 'ro', isa => 'Str');
  has StatusMessage => (is => 'ro', isa => 'Str');
  has VideoMetadata => (is => 'ro', isa => 'Paws::Rekognition::VideoMetadata');

  has _request_id => (is => 'ro', isa => 'Str');

### main pod documentation begin ###

=head1 NAME

Paws::Rekognition::GetContentModerationResponse

=head1 ATTRIBUTES

=head2 JobStatus => Str

The current status of the unsafe content analysis job.

Valid values are: C<"IN_PROGRESS">, C<"SUCCEEDED">, C<"FAILED">

=head2 ModerationLabels => ArrayRef[L<Paws::Rekognition::ContentModerationDetection>]

The detected unsafe content labels and the time(s) they were detected.

=head2 ModerationModelVersion => Str

Version number of the moderation detection model that was used to
detect unsafe content.

=head2 NextToken => Str

If the response is truncated, Amazon Rekognition Video returns this
token that you can use in the subsequent request to retrieve the next
set of unsafe content labels.

=head2 StatusMessage => Str

If the job fails, C<StatusMessage> provides a descriptive error
message.

=head2 VideoMetadata => L<Paws::Rekognition::VideoMetadata>

Information about a video that Amazon Rekognition analyzed.
C<VideoMetadata> is returned in every page of paginated responses from
C<GetContentModeration>.

=head2 _request_id => Str

=cut

1;
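=pod

=head1 SYNOPSIS

A usage sketch, not part of this class's API surface: it assumes a
configured Paws installation with Rekognition access and a previously
started content moderation job. The job ID, region, and loop structure
below are illustrative. It shows how C<NextToken> drives pagination and
how the attributes documented above are read from each response page.

    use Paws;

    my $rekognition = Paws->service('Rekognition', region => 'us-east-1');

    # Page through moderation results until NextToken is exhausted.
    my $next_token;
    do {
      my $res = $rekognition->GetContentModeration(
        JobId => 'illustrative-job-id',    # hypothetical value
        ($next_token ? (NextToken => $next_token) : ()),
      );
      if ($res->JobStatus eq 'SUCCEEDED') {
        for my $detection (@{ $res->ModerationLabels }) {
          printf "%s at %d ms\n",
            $detection->ModerationLabel->Name, $detection->Timestamp;
        }
      }
      elsif ($res->JobStatus eq 'FAILED') {
        die 'Job failed: ' . ($res->StatusMessage // 'unknown error');
      }
      $next_token = $res->NextToken;
    } while (defined $next_token && length $next_token);

=cut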