c - Why is my input delayed when sent to a process via pipes? -


I'm writing a program for an operating systems project that's meant to act like a modem: you type a key on the keyboard, and it outputs an FSK-modulated audio signal corresponding to the ASCII value of that key. The way I've set it up, the program forks a process and execs a program called minimodem (see here for info). The parent sets non-canonical input mode and gets user input one character at a time. Each character is sent to the child via a pipe. I'll paste the code now:

    #include <stdlib.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <string.h>
    #include <termios.h>

    extern char* program_invocation_short_name;

    static struct termios old, new;
    void init_termios(int echo);
    void reset_termios(void);

    int main(int argc, char* argv[])
    {
        pid_t pid;
        int my_pipe[2];

        char* baud = "300";
        if (argc == 2) {
            if (atoi(argv[1]) == 0) {
                printf("usage: %s [baud]\n", program_invocation_short_name);
                return EXIT_SUCCESS;
            }
            baud = argv[1];
        }

        if (argc > 2) {
            printf("too many arguments.\nusage: %s [baud]\n", program_invocation_short_name);
            return EXIT_SUCCESS;
        }

        if (pipe(my_pipe) == -1) {
            fprintf(stderr, "%s: %s", program_invocation_short_name, strerror(errno));
            return EXIT_FAILURE;
        }

        pid = fork();
        if (pid < (pid_t) 0) {
            fprintf(stderr, "%s: %s", program_invocation_short_name, strerror(errno));
            return EXIT_FAILURE;
        } else if (pid == (pid_t) 0) {
            /***************/
            /*child process*/
            /***************/
            close(my_pipe[1]);   /* child doesn't write */
            dup2(my_pipe[0], 0); /* redirect stdin to the read side of the pipe */
            close(my_pipe[0]);   /* close the read end now that it's dup'd */
            execl("/usr/local/bin/minimodem", "minimodem", "--tx", baud, "-q", "-a", NULL);
            fprintf(stderr, "%s: %s", program_invocation_short_name, strerror(errno));
        } else if (pid > (pid_t) 0) {
            /****************/
            /*parent process*/
            /****************/
            char c;
            close(my_pipe[0]); /* parent doesn't read */
            init_termios(1);
            atexit(reset_termios);

            while (1) {
                c = getchar();
                if (c == 0x03) /* Ctrl-C */
                    break;
                if (write(my_pipe[1], &c, 1) == -1) {
                    fprintf(stderr, "%s: %s",
                            program_invocation_short_name, strerror(errno));
                    return EXIT_FAILURE;
                }
            }
            close(my_pipe[1]);
        }
        return EXIT_SUCCESS;
    }

    void init_termios(int echo)
    {
        tcgetattr(0, &old);       /* get old terminal i/o settings */
        new = old;                /* make new settings same as old settings */
        new.c_lflag &= ~ICANON;   /* disable canonical (line-buffered) input */
        if (echo)
            new.c_lflag |= ECHO;  /* set appropriate echo mode */
        else
            new.c_lflag &= ~ECHO;
        tcsetattr(0, TCSANOW, &new); /* use the new terminal i/o settings */
    }

    void reset_termios(void)
    {
        tcsetattr(0, TCSANOW, &old);
    }

My problem is with the user input. When typing, it seems the first character gets written and its audio is generated, then there's a delay before the rest of the characters in the buffer are generated continuously the way they're meant to be. If there's a big enough pause in typing, it starts over: the first character typed after the break gets generated, followed by a delay, and then the intended functionality. I have my fingers crossed that this isn't because the minimodem program wasn't written to be used this way, and that the problem can be overcome. If anyone can shed light on the matter I'd be so grateful. Thanks.

Note: I've tried putting the input into a ring buffer, with the input being consumed and sent to the child in a separate thread. No better. Not sure if noting that is productive.

minimodem will write 0.5 seconds of silence when it detects that there isn't enough input data to keep the audio buffer full. This is an attempt to make sure that whatever audio it writes comes out continuously. It's typically the case with audio drivers or servers (like PulseAudio) that audio is written in chunks. When you write less than a full chunk of data to the audio buffer, the card produces no sound, since the driver or server is waiting for enough audio data so that it can write a full chunk at once. Since the data minimodem writes is not going to line up with full chunks of audio, you'd have a situation where the last part of the audio data never gets written. To avoid that problem, minimodem writes enough silence to guarantee that the last part of the audio gets output to the card.

For example, let's say the audio driver or server writes in 1000-byte chunks, and let's say you type 1 character and it produces 2500 bytes of audio data. The audio driver or server will cause 2000 bytes of data to be played, with 500 bytes left over in the buffer. Since the modem protocol requires continuous audio, it doesn't make sense to leave those 500 bytes of audio data sitting in the buffer. The receiver wouldn't see a complete character; it would have to assume that the audio was garbled and discard the character.
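The chunk arithmetic in that example can be sketched as follows. The 1000-byte chunk size and 2500-byte character are just the illustrative numbers from this answer, not values minimodem actually uses:

    #include <stdio.h>

    int main(void)
    {
        const int chunk_size = 1000;  /* bytes per chunk the audio server plays */
        const int char_audio = 2500;  /* bytes of audio one character produces */

        int played   = (char_audio / chunk_size) * chunk_size; /* whole chunks only */
        int leftover = char_audio % chunk_size;                /* stuck in buffer */

        printf("played immediately: %d bytes\n", played);   /* 2000 */
        printf("left in buffer:     %d bytes\n", leftover); /* 500  */
        return 0;
    }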

So, to avoid this, minimodem writes 0.5 seconds of silence, which is thousands of bytes of audio data. That guarantees the 500 bytes of audio data left in the buffer get played, since there is now enough audio data to finish out the chunk.

Now, it's true that there will be silence left sitting in the buffer, and it won't be played immediately, but that's not a big deal, since it won't cause corruption. The receiver can handle varying amounts of silence between data. (Note that this is specific to how minimodem works, not to a real modem generating sound.)

minimodem could be improved by letting you set the chunk size manually. That way, it would know exactly how much silence needs to be written to flush the audio instead of writing an arbitrary amount, and the delay would be reduced to a minimum.
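A minimal sketch of that idea, padding only up to the next chunk boundary rather than writing a fixed 0.5 seconds of silence. The function name and the chunk sizes here are hypothetical illustrations, not part of minimodem's actual code:

    #include <stdio.h>

    /* Bytes of silence needed to push the tail of `total_bytes` of audio
     * out of a driver that only plays whole `chunk_size`-byte chunks.
     * (Hypothetical helper; minimodem has no such option today.) */
    static int flush_padding(int total_bytes, int chunk_size)
    {
        int leftover = total_bytes % chunk_size;
        return leftover ? chunk_size - leftover : 0; /* already aligned: no padding */
    }

    int main(void)
    {
        /* With the 1000-byte chunks and 2500-byte character from the example,
         * only 500 bytes of silence are needed, far less than 0.5 seconds' worth. */
        printf("padding needed: %d bytes\n", flush_padding(2500, 1000)); /* 500 */
        printf("padding needed: %d bytes\n", flush_padding(3000, 1000)); /* 0   */
        return 0;
    }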

